Upload files to "/"

This commit is contained in:
datechnoman 2023-12-12 09:56:59 +00:00
parent 72c85f86c6
commit ab9b99ccf0


@@ -11,10 +11,14 @@ Depending on the types of URLs that are being processed you will either need to
<b>Steps:</b>
1. Git Clone the Repository.
2. Run blogger_url_cleaner.py against the directory of txt.gz files you want to process. The script will ask you to enter the location of the files, where you want to store the output and the concurrency to run the script at (a rough sketch of this flow follows the steps below).
3. Once the script has completed running, verify the output and, if there are many Blogger image links, run blogger_remove_img_lines.py.
4. Run blogger_remove_img_lines.py against the directory containing the newly created output from step 2. The script will ask you to enter the location of the files, where you want to store the output and the concurrency to run the script at.
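The exact prompts and cleaning logic live in the scripts themselves; as a rough, non-authoritative sketch of the workflow described in steps 2-4 (the prompt wording, the `clean_file` worker and the output file naming below are illustrative assumptions, not the actual code), the flow looks something like this:

```python
import gzip
import os
from multiprocessing import Pool

def clean_file(args):
    """Illustrative worker: stream one txt.gz file and write its kept lines to the output directory."""
    src_path, out_dir = args
    out_name = os.path.basename(src_path).replace(".txt.gz", "_cleaned.txt.gz")
    out_path = os.path.join(out_dir, out_name)
    with gzip.open(src_path, "rt", errors="replace") as src, gzip.open(out_path, "wt") as dst:
        for line in src:      # streamed line by line, never fully loaded into RAM
            dst.write(line)   # the real script cleans/filters the URL at this point
    return out_path

if __name__ == "__main__":
    input_dir = input("Location of the txt.gz files: ")
    output_dir = input("Location to store the output: ")
    concurrency = int(input("Concurrency (number of processes): "))

    files = [os.path.join(input_dir, f) for f in os.listdir(input_dir) if f.endswith(".txt.gz")]
    with Pool(processes=concurrency) as pool:
        for finished in pool.imap_unordered(clean_file, [(f, output_dir) for f in files]):
            print(f"Finished {finished}")
```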
<b>Notes:</b>
<ul>
<li>The script is hard-set to stream the txt files from the given location and process them line by line, mitigating the need for large files to be loaded into RAM (see the sketch below this list).</li>
<li>Running the script over the network is fine and performance does not appear to be impacted. When running against CommonCrawl WAT files it has been identified that a 1Gbit link can be saturated with 12 concurrent processes on an i7, with the CPU still having additional capacity.</li>
</ul>
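To illustrate the streaming note above, here is a minimal sketch of line-by-line processing of a gzipped file; the bp.blogspot.com substring check used to spot Blogger image lines is an assumed placeholder rather than the script's actual rule:

```python
import gzip

def filter_image_lines(src_path, out_path):
    """Stream a .txt.gz file and drop lines that look like Blogger image links.

    Only one line is held in memory at a time, so file size does not affect RAM use.
    """
    kept = dropped = 0
    with gzip.open(src_path, "rt", errors="replace") as src, \
         gzip.open(out_path, "wt") as dst:
        for line in src:
            if "bp.blogspot.com" in line:  # placeholder test for an image link
                dropped += 1
                continue
            dst.write(line)
            kept += 1
    return kept, dropped
```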