
Overview:

Use the following scripts to extract URLs from .txt.gz files and write them to a .txt file.

Depending on the types of URLs being processed, you will either need only "blogger_url_cleaner.py" (which plainly extracts the URLs from a file) or also "blogger_remove_img_lines.py", which reads the txt file and outputs every line that does not contain "jpg|png|gif|jpeg".
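The image-filtering pass can be sketched as follows. Only the pattern "jpg|png|gif|jpeg" comes from this README; the exact matching logic inside blogger_remove_img_lines.py (e.g. case handling) is an assumption.

```python
import re

# Pattern quoted in the README; case-insensitive matching is an assumption.
IMG_RE = re.compile(r"jpg|png|gif|jpeg", re.IGNORECASE)

def non_image_lines(lines):
    """Yield only the lines that do not mention an image extension."""
    for line in lines:
        if not IMG_RE.search(line):
            yield line
```

For example, given a line ending in ".jpg" and one ending in ".html", only the ".html" line is kept.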

Requirements:

  • Python3

Steps:

  1. Git Clone the Repository.
  2. Run blogger_url_cleaner.py against the directory of .txt.gz files you want to process. The script will prompt you for the location of the files, where to store the output, and the concurrency to run at.
  3. Once the script has finished, verify the output; if it contains many Blogger image links, continue to the next step.
  4. Run blogger_remove_img_lines.py against the directory containing the output from step 2. The script will prompt you for the same three values: input location, output location, and concurrency.
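The core of step 2 can be sketched as below. This is a hypothetical stand-in for blogger_url_cleaner.py's main loop, not its actual code; the URL regex and function name are assumptions, and the real script prompts interactively rather than taking arguments.

```python
import gzip
import re
from pathlib import Path

# Simple URL pattern for illustration; the real script's pattern may differ.
URL_RE = re.compile(r"https?://\S+")

def extract_urls(input_dir: str, output_file: str) -> int:
    """Stream every .txt.gz file in input_dir and append found URLs to output_file."""
    count = 0
    with open(output_file, "w", encoding="utf-8") as out:
        for path in sorted(Path(input_dir).glob("*.txt.gz")):
            # gzip.open in text mode streams line by line; the file is never
            # fully loaded into RAM.
            with gzip.open(path, "rt", encoding="utf-8", errors="replace") as fh:
                for line in fh:
                    for url in URL_RE.findall(line):
                        out.write(url + "\n")
                        count += 1
    return count
```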

Notes:

  • The script is hard-coded to stream the txt files from their location and process them line by line, mitigating the need to load large files into RAM.
  • Running the script over the network is fine, and performance does not appear to be impacted. When running against CommonCrawl WAT files, a 1 Gbit link can be saturated with 12 concurrent processes on an i7, with CPU capacity still to spare.
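The concurrency the scripts prompt for could map onto a process pool along these lines. This is a sketch under assumptions: how the scripts actually distribute work is not stated in this README, and process_file is a hypothetical placeholder for the per-file streaming work.

```python
from multiprocessing import Pool
from pathlib import Path

def process_file(path: str) -> str:
    # Placeholder for the real per-file work (streaming extraction/filtering).
    return f"done: {path}"

def run_concurrently(input_dir: str, workers: int = 12) -> list:
    """Process each .txt.gz file in input_dir, up to `workers` files at a time."""
    files = [str(p) for p in Path(input_dir).glob("*.txt.gz")]
    with Pool(processes=workers) as pool:
        return pool.map(process_file, files)
```

A process pool (rather than threads) matches the README's observation of "12 concurrent processes", and sidesteps the GIL for CPU-bound parsing.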