From 4bb661991102e4ad897fca5783d2843d1f57f80d Mon Sep 17 00:00:00 2001
From: paprykdev <58005447+paprykdev@users.noreply.github.com>
Date: Fri, 15 Nov 2024 17:13:29 +0100
Subject: [PATCH] docs: update README to simplify usage instructions and
 remove outdated content

---
 README.md | 49 ++++++++++++++++++-------------------------------
 1 file changed, 18 insertions(+), 31 deletions(-)

diff --git a/README.md b/README.md
index 2e1e1a6..28a5d7d 100644
--- a/README.md
+++ b/README.md
@@ -2,23 +2,20 @@

 ## Description

-This project is a web scraper designed to extract data from websites. It can be customized to scrape various types of data and save it in different formats.
+This project is a web scraper designed to extract data from websites.

 ## Features

-- Extracts data from web pages
-
+☑️ Extracts data from web pages

-## Installation
+## Usage

-### Using Docker
+### With Docker

 1. Clone the repository:

 ```bash
-git clone https://git.wmi.amu.edu.pl/s500042/webscraper
+git clone https://git.wmi.amu.edu.pl/s500042/webscraper
 ```

 2. Navigate to the project directory:
@@ -27,20 +24,13 @@
 cd webscraper
 ```

-3. Build the Docker image and run it using script:
-
-- On Linux, ?Mac
-
-```bash
-./start.sh
-```
-
-- Windows 🤡
+3. Build the Docker image and run it using the `start.py` script:

 ```bash
 python start.py
 ```

-This one will work just fine on Linux, but on Mac, you'll have to use
+On Mac, you'll have to use:

 ```bash
 python3 start.py
@@ -51,31 +41,28 @@
 1. Clone the repository:

 ```bash
-git clone https://github.com/yourusername/webscraper.git
+git clone https://github.com/yourusername/webscraper.git
 ```

-2. Navigate to the project directory:
+2. Install the required dependencies:

 ```bash
-cd webscraper/app
-```
-
-3. Install the required dependencies:
-
-```bash
-pip install -r requirements.txt
+pip install -r requirements.txt
 ```

 If you're on Arch Linux, you'll need to create a virtual environment.
 Here's is a [Step by step guide](#) that will help you create it.

-## Usage
-
-1. Configure the scraper by editing the `config.json` file.
-2. Run the scraper:
+3. Run the `run-with-no-docker.py` script:

 ```bash
-python scraper.py
+python run-with-no-docker.py
+```
+
+On Mac, you'll need to use:
+
+```bash
+python3 run-with-no-docker.py
 ```

 ## License