docs: update README to simplify usage instructions and remove outdated content
parent cc79bb9d9d · commit 4bb6619911

README.md (49 changed lines)
@@ -2,23 +2,20 @@
 
 ## Description
 
-This project is a web scraper designed to extract data from websites. It can be customized to scrape various types of data and save it in different formats.
+This project is a web scraper designed to extract data from websites.
 
 ## Features
 
-- Extracts data from web pages
-<!-- - Supports multiple data formats (CSV, JSON, etc.)
-- Customizable scraping rules
-- Error handling and logging -->
+☑️ Extracts data from web pages
 
-## Installation
+## Usage
 
-### Using Docker
+### With Docker
 
 1. Clone the repository:
 
 ```bash
 git clone https://git.wmi.amu.edu.pl/s500042/webscraper
 ```
 
 2. Navigate to the project directory:
@@ -27,20 +24,13 @@ git clone https://git.wmi.amu.edu.pl/s500042/webscraper
 cd webscraper
 ```
 
-3. Build the Docker image and run it using script:
-   - On Linux, ?Mac <!-- I haven't tested it yet -->
-
-```bash
-./start.sh
-```
-
-- Windows 🤡
+3. Build the Docker image and run it using the `start.py` script:
 
 ```bash
 python start.py
 ```
 
-This one will work just fine on Linux, but on Mac, you'll have to use
+On Mac, you'll have to use:
 
 ```bash
 python3 start.py
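A note on the `python` vs `python3` split above: a small shell check can pick whichever Python 3 interpreter the machine provides. This is a hedged sketch, not part of the repository; the `PY` variable name is illustrative:

```shell
# Select a Python 3 interpreter: most Linux distros expose `python`,
# while macOS typically ships only `python3`.
if command -v python3 >/dev/null 2>&1; then
  PY=python3
else
  PY=python
fi
"$PY" --version    # then e.g.: "$PY" start.py
```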
@@ -51,31 +41,28 @@ python3 start.py
 1. Clone the repository:
 
 ```bash
 git clone https://github.com/yourusername/webscraper.git
 ```
 
-2. Navigate to the project directory:
-
-```bash
-cd webscraper/app
-```
-
-3. Install the required dependencies:
+2. Install the required dependencies:
 
 ```bash
 pip install -r requirements.txt
 ```
 
-## Usage
-
-1. Configure the scraper by editing the `config.json` file.
-2. Run the scraper:
+If you're on Arch Linux, you'll need to create a virtual environment.
+Here's a [step-by-step guide](#) that will help you create it.
+
+3. Run the `run-with-no-docker.py` script:
 
 ```bash
-python scraper.py
+python run-with-no-docker.py
 ```
 
+On Mac, you'll need to use:
+
+```bash
+python3 run-with-no-docker.py
+```
+
 ## License
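The Arch Linux note in the diff above can be sketched as a standard `venv` workflow. This is an illustrative example only; the `.venv` directory name is an assumption, not something the repository prescribes:

```shell
# Create an isolated environment so pip does not modify the system Python
# (Arch Linux blocks system-wide `pip install` by default):
python3 -m venv .venv
. .venv/bin/activate
python -m pip --version            # pip now resolves inside .venv
# with the environment active, install the dependencies:
#   pip install -r requirements.txt
```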