# Recipe Graph

## Setup

Prerequisites:

- Docker Compose
- Python

Install the Python requirements:

```sh
python -m pip install -r requirements.txt
```

Start the database:

```sh
docker-compose up
```

Initialize the database and the recipe sites:

```sh
python src/db.py
python src/inser_sites.py data/sites.json
```
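
The exact format of `data/sites.json` is not documented here; judging from the scraper options below (a site name and a base url of the site), a hypothetical entry could look something like this (the field names are guesses, not the project's actual schema):

```json
[
  {
    "name": "ExampleSite",
    "base_url": "https://www.example.com/recipes/"
  }
]
```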

## Usage

### Scrape

Import new recipes.

To scrape a single recipe:

```sh
python src/scrape.py <SiteName> -id <RecipeIdentifier>
```

or, to scrape `<N>` recipes:

```sh
python src/scrape.py <SiteName> -a <N>
```

By default, scraping starts at id `0`, or at the greatest id already in the
database. To start at another value, use both `-id` and `-a`.
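
For example, a hypothetical invocation (placeholder values, not taken from the project docs) that starts auto-generation at id `100` and scrapes `10` recipes could look like:

```sh
python src/scrape.py <SiteName> -id 100 -a 10
```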

```
Scrape a recipe site for recipes

positional arguments:
  site                  Name of site

options:
  -h, --help            show this help message and exit
  -id ID, --identifier ID
                        url of recipe (relative to base url of site) or comma-separated list
  -a N, --auto N        automatically generate identifier (must supply number of recipes to scrape)
  -v, --verbose
```

## TODO

> ☑ automate scraping\
> ☐ extend importing functionality to more websites\
> ☑ extracting quantity and name (via regex, sketched below)\
> ☐ create ontology of ingredients\
> ☐ visualization
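
As an illustration of the quantity-and-name extraction idea marked done above, a minimal sketch might look like the following (hypothetical code, assuming plain-text ingredient lines; `parse_ingredient` is not the project's actual function):

```python
import re

# Hypothetical sketch: split an ingredient line such as "200 g flour" into a
# leading numeric quantity (optional) and the remaining name/unit text.
INGREDIENT_RE = re.compile(r"^\s*(?P<quantity>\d+(?:[.,/]\d+)?)?\s*(?P<name>.+?)\s*$")

def parse_ingredient(line):
    match = INGREDIENT_RE.match(line)
    if not match:
        return None, line.strip()
    return match.group("quantity"), match.group("name")

print(parse_ingredient("200 g flour"))  # ('200', 'g flour')
print(parse_ingredient("salt"))         # (None, 'salt')
```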