Update README.md

This commit is contained in:
datechnoman 2023-12-12 11:04:52 +00:00
parent 1e25ef86fe
commit 65757f8cc4

<b>Overview:</b>
Use the following set of scripts to extract URLs from CommonCrawl WAT files and output them to a compressed txt file.
The stack comprises two processes running concurrently: one on the URL extractor server downloading, processing, and compressing the results, and the other pulling the finished files via rsync to a destination of your choice.
The extractor can be run without the second process, but the server will quickly fill if large data sets are being processed (e.g. CommonCrawl WARC/WAT files).
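The pull side of that pairing can be a simple looped rsync from the extractor server. A minimal sketch follows; the host, source path, and destination are placeholders rather than values taken from the scripts:

```bash
# Placeholder host and paths -- substitute your extractor server and destination.
# --remove-source-files frees space on the extractor server once each archive
# has been transferred, which is the reason for running this second process.
while true; do
  rsync -av --remove-source-files \
    user@extractor-server:/opt/CommonCrawl_URL_Processor/output/ \
    /mnt/archive/commoncrawl_urls/
  sleep 300   # re-check for new files every five minutes
done
```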
<b>Requirements:</b>
<ul>
<li>Python3</li>
<li>Gzip</li>
<li>Axel</li>
<li>Parallel</li>
</ul>
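On a Debian or Ubuntu host, the dependencies can typically be installed in one step (package names assumed to match those distributions' defaults):

```bash
# Assumed Debian/Ubuntu package names; adjust for other distributions.
sudo apt-get update
sudo apt-get install -y python3 gzip axel parallel
```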
<b>Pre-Setup Steps:</b>
Before running the scripts, a few steps are required to set up the stack.
A script has been provided to automate them.
Run "prerequisites.sh" to set up the stack.
<b>Steps:</b>
1. Run url_extractor.py against the directory of txt.gz files you want to process. The script will prompt you for the location of the files, where to store the output, and the concurrency to run at (the shape of this processing is sketched after these steps).
2. Once the script has finished running, verify the output files.
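For a sense of what step 1 does under the hood, the processing can be sketched with the tools listed above. This is an illustration only, not the actual url_extractor.py; the directories and job count stand in for the values the script prompts for:

```bash
#!/usr/bin/env bash
# Illustrative sketch only -- the real work is done by url_extractor.py.
# WAT_DIR, OUT_DIR, and the -j value are placeholders for the prompted inputs.
WAT_DIR=/data/wat_files        # directory of downloaded *.gz WAT files
OUT_DIR=/data/extracted_urls   # where the compressed URL lists will land
export OUT_DIR
mkdir -p "$OUT_DIR"

# For each archive: decompress on the fly, keep anything that looks like a URL,
# and recompress the result. GNU Parallel processes 4 files at a time.
find "$WAT_DIR" -name '*.gz' | parallel -j 4 \
  'zcat {} | grep -oE "https?://[^\"[:space:]]+" | gzip > "$OUT_DIR"/{/.}.txt.gz'
```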
<b>Notes:</b>
<ul>
<li>A text file named "urls_to_download.txt" containing all of the CommonCrawl links must be located in /opt/CommonCrawl_URL_Processor.</li>
</ul>