Mirror of https://github.com/KevinMidboe/bulk-downloader-for-reddit.git, synced 2026-01-10 19:25:41 +00:00
# Compare commits

34 commits:

- `27532408c1`
- `32647beee9`
- `a67da461d2`
- `8c6f593496`
- `b60ce8a71e`
- `49920cc457`
- `c70e7c2ebb`
- `3931dfff54`
- `4a8c2377f9`
- `8a18a42a9a`
- `6c2d748fbc`
- `8c966df105`
- `2adf2c0451`
- `3e3a2df4d1`
- `7548a01019`
- `2ab16608d5`
- `e15f33b97a`
- `27211f993c`
- `87d3b294f7`
- `8128378dcd`
- `cc93aa3012`
- `50c4a8d6d7`
- `5737904a54`
- `f6eba6c5b0`
- `41cbb58db3`
- `c569124406`
- `1a3836a8e1`
- `fde6a1fac4`
- `6bba2c4dbb`
- `a078d44236`
- `deae0be769`
- `3cf0203e6b`
- `0b31db0e2e`
- `d3f2b1b08e`
## README.md (34 changed lines)
```diff
@@ -6,7 +6,7 @@ This program downloads imgur, gfycat and direct image and video links of saved p
 ## What it can do
 - Can get posts from: frontpage, subreddits, multireddits, redditor's submissions, upvoted and saved posts; search results or just plain reddit links
 - Sorts posts by hot, top, new and so on
-- Downloads imgur albums, gfycat links, [self posts](#i-cant-open-the-self-post-files) and any link to a direct image
+- Downloads imgur albums, gfycat links, [self posts](#how-do-i-open-self-post-files) and any link to a direct image
 - Skips the existing ones
 - Puts post titles to file's name
 - Puts every post to its subreddit's folder
@@ -19,12 +19,12 @@ This program downloads imgur, gfycat and direct image and video links of saved p
 ## How it works
 
 - For **Windows** and **Linux** users, there are executable files to run easily without installing a third party program. But if you are a paranoid like me, you can **[compile it from source code](docs/COMPILE_FROM_SOURCE.md)**.
-  - In Windows, double click on script.exe file
-  - In Linux, extract files to a folder and open terminal inside it. Type **`./script`**
+  - In Windows, double click on bulk-downloader-for-reddit file
+  - In Linux, extract files to a folder and open terminal inside it. Type **`./bulk-downloader-for-reddit`**
 
 - **MacOS** users have to **[compile it from source code](docs/COMPILE_FROM_SOURCE.md)**.
 
-Script also accepts **command-line arguments**, get further information from **[`python script.py --help`](docs/COMMAND_LINE_ARGUMENTS.md)**
+Script also accepts **command-line arguments**, get further information from **[`--help`](docs/COMMAND_LINE_ARGUMENTS.md)**
 
 ## Setting up the script
 Because this is not a commercial app, you need to create an imgur developer app in order API to work.
@@ -42,11 +42,33 @@ It should redirect to a page which shows your **imgur_client_id** and **imgur_cl
 \* Select **OAuth 2 authorization without a callback URL** first then select **Anonymous usage without user authorization** if it says *Authorization callback URL: required*
 
 ## FAQ
-### I can't open the self post files.
+### How do I open self post files?
 - Self posts are held at reddit as styled with markdown. So, the script downloads them as they are in order not to lose their stylings.
 However, there is a [great Chrome extension](https://chrome.google.com/webstore/detail/markdown-viewer/ckkdlimhmcjmikdlpkmbgfkaikojcbjk) for viewing Markdown files with its styling. Install it and open the files with [Chrome](https://www.google.com/intl/tr/chrome/).
+
+However, they are basically text files. You can also view them with any text editor such as Notepad on Windows, gedit on Linux or Text Editor on MacOS
+
+### How can I change my credentials?
+- All of the user data is held in **config.json** file which is in a folder named "Bulk Downloader for Reddit" in your **Home** directory. You can edit them there.
+
+## Changelog
+### [22/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/a67da461d2fcd70672effcb20c8179e3224091bb)
+- Put log files in a folder named "LOG_FILES"
+- Fixed the bug that makes multireddit mode unusable
+
+### [21/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/4a8c2377f9fb4d60ed7eeb8d50aaf9a26492462a)
+- Added exclude mode
+
+### [20/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/commit/7548a010198fb693841ca03654d2c9bdf5742139)
+- "0" input for no limit
+- Fixed the bug that recognizes none image direct links as image links
+
+### [19/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/41cbb58db34f500a8a5ecc3ac4375bf6c3b275bb)
+- Added v.redd.it support
+- Added custom exception descriptions to FAILED.json file
+- Fixed the bug that prevents downloading some gfycat URLs
+
+### [13/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/9f831e1b784a770c82252e909462871401a05c11)
+- Change config.json file's path to home directory
```
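The new FAQ entry above says all user data lives in a **config.json** inside a "Bulk Downloader for Reddit" folder in the home directory. As a minimal sketch of editing credentials programmatically rather than by hand, assuming the file keeps the **imgur_client_id** and **imgur_client_secret** values from the setup section under keys of the same name (the key names are an assumption, not something this diff confirms):

```python
# Hypothetical sketch: update the imgur credentials stored in config.json.
# The path comes from the FAQ above; the key names inside the file are assumed.
import json
from pathlib import Path

config_path = Path.home() / "Bulk Downloader for Reddit" / "config.json"

config = json.loads(config_path.read_text())
config["imgur_client_id"] = "NEW_CLIENT_ID"          # assumed key name
config["imgur_client_secret"] = "NEW_CLIENT_SECRET"  # assumed key name
config_path.write_text(json.dumps(config, indent=4))
```

Editing the file in any text editor achieves the same thing.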
@@ -1,5 +0,0 @@
|
||||
theme: jekyll-theme-minimal
|
||||
show_downloads: false
|
||||
#title: Bulk Downloader for Reddit
|
||||
description: Code written by Ali PARLAKCI
|
||||
google_analytics: UA-80780721-3
|
||||
## Documentation changes (command-line arguments and compiling from source)

````diff
@@ -2,7 +2,7 @@
 
 See **[compiling from source](COMPILE_FROM_SOURCE.md)** page first unless you are using an executable file. If you are using an executable file, see [using terminal](COMPILE_FROM_SOURCE.md#using-terminal) and come back.
 
-***Use*** `.\script.exe` ***or*** `./script` ***if you are using the executable***.
+***Use*** `.\bulk-downloader-for-reddit.exe` ***or*** `./bulk-downloader-for-reddit` ***if you are using the executable***.
 ```console
 $ python script.py --help
 usage: script.py [-h] [--directory DIRECTORY] [--link link] [--saved]
@@ -23,7 +23,8 @@ optional arguments:
   --saved               Triggers saved mode
   --submitted           Gets posts of --user
   --upvoted             Gets upvoted posts of --user
-  --log LOG FILE        Triggers log read mode and takes a log file
+  --log LOG FILE        Takes a log file which created by itself (json files),
+                        reads posts and tries downloading them again.
   --subreddit SUBREDDIT [SUBREDDIT ...]
                         Triggers subreddit mode and takes subreddit's name
                         without r/. use "frontpage" for frontpage
@@ -39,6 +40,8 @@ optional arguments:
                         all
   --NoDownload          Just gets the posts and store them in a file for
                         downloading later
+  --exclude {imgur,gfycat,direct,self} [{imgur,gfycat,direct,self} ...]
+                        Do not download specified links
 ```
 
 # Examples
@@ -50,7 +53,7 @@ python script.py
 ```
 
 ```console
-.\script.exe
+.\bulk-downloader-for-reddit.exe
 ```
 
 ```console
@@ -58,11 +61,11 @@ python script.py
 ```
 
 ```console
-.\script.exe -- directory .\\NEW_FOLDER --search cats --sort new --time all --subreddit gifs pics --NoDownload
+.\bulk-downloader-for-reddit.exe -- directory .\\NEW_FOLDER --search cats --sort new --time all --subreddit gifs pics --NoDownload
 ```
 
 ```console
-./script --directory .\\NEW_FOLDER\\ANOTHER_FOLDER --saved --limit 1000
+./bulk-downloader-for-reddit --directory .\\NEW_FOLDER\\ANOTHER_FOLDER --saved --limit 1000
 ```
 
 ```console
@@ -15,7 +15,7 @@ Latest* version of **Python 3** is needed. See if it is already installed [here]
 - **On MacOS**: Look for an app called **Terminal**.
 
 ### Navigating to the directory where script is downloaded
-Go inside the folder where script.py is located. If you are not familier with changing directories on command-prompt and terminal read *Changing Directories* in [this article](https://lifehacker.com/5633909/who-needs-a-mouse-learn-to-use-the-command-line-for-almost-anything)
+Go inside the folder where script.py is located. If you are not familiar with changing directories on command-prompt and terminal read *Changing Directories* in [this article](https://lifehacker.com/5633909/who-needs-a-mouse-learn-to-use-the-command-line-for-almost-anything)
 
 ## Finding the correct keyword for Python
 Enter these lines to terminal window until it prints out the version you have downloaded and installed:
````
## script.py (68 changed lines)
```diff
@@ -22,7 +22,7 @@ from src.tools import (GLOBAL, createLogFile, jsonFile, nameCorrector,
 
 __author__ = "Ali Parlakci"
 __license__ = "GPL"
-__version__ = "1.1.2"
+__version__ = "1.3.1"
 __maintainer__ = "Ali Parlakci"
 __email__ = "parlakciali@gmail.com"
 
@@ -143,6 +143,12 @@ def parseArguments(arguments=[]):
                              " for downloading later",
                         action="store_true",
                         default=False)
+
+    parser.add_argument("--exclude",
+                        nargs="+",
+                        help="Do not download specified links",
+                        choices=["imgur","gfycat","direct","self"],
+                        type=str)
 
     if arguments == []:
         return parser.parse_args()
@@ -159,7 +165,10 @@ def checkConflicts():
     else:
         user = 1
 
-    modes = ["saved","subreddit","submitted","search","log","link","upvoted"]
+    modes = [
+        "saved","subreddit","submitted","search","log","link","upvoted",
+        "multireddit"
+    ]
 
     values = {
         x: 0 if getattr(GLOBAL.arguments,x) is None or \
@@ -246,7 +255,6 @@ class PromptUser:
                 # DELETE THE PLUS (+) AT THE END
                 GLOBAL.arguments.subreddit = GLOBAL.arguments.subreddit[:-1]
 
-            print(GLOBAL.arguments.subreddit)
             print("\nselect sort type:")
             sortTypes = [
                 "hot","top","new","rising","controversial"
@@ -266,7 +274,7 @@ class PromptUser:
 
         elif programMode == "multireddit":
             GLOBAL.arguments.user = input("\nredditor: ")
-            GLOBAL.arguments.subreddit = input("\nmultireddit: ")
+            GLOBAL.arguments.multireddit = input("\nmultireddit: ")
 
             print("\nselect sort type:")
             sortTypes = [
@@ -319,9 +327,37 @@ class PromptUser:
                 if Path(GLOBAL.arguments.log ).is_file():
                     break
 
+        GLOBAL.arguments.exclude = []
+
+        sites = ["imgur","gfycat","direct","self"]
+
+        excludeInput = input("exclude: ").lower()
+        if excludeInput in sites and excludeInput != "":
+            GLOBAL.arguments.exclude = [excludeInput]
+
+        while not excludeInput == "":
+            while True:
+                excludeInput = input("exclude: ").lower()
+                if not excludeInput in sites or excludeInput in GLOBAL.arguments.exclude:
+                    break
+                elif excludeInput == "":
+                    break
+                else:
+                    GLOBAL.arguments.exclude.append(excludeInput)
+
+        for i in range(len(GLOBAL.arguments.exclude)):
+            if " " in GLOBAL.arguments.exclude[i]:
+                inputWithWhitespace = GLOBAL.arguments.exclude[i]
+                del GLOBAL.arguments.exclude[i]
+                for siteInput in inputWithWhitespace.split():
+                    if siteInput in sites and siteInput not in GLOBAL.arguments.exclude:
+                        GLOBAL.arguments.exclude.append(siteInput)
+
         while True:
             try:
-                GLOBAL.arguments.limit = int(input("\nlimit: "))
+                GLOBAL.arguments.limit = int(input("\nlimit (0 for none): "))
+                if GLOBAL.arguments.limit == 0:
+                    GLOBAL.arguments.limit = None
                 break
             except ValueError:
                 pass
@@ -376,6 +412,9 @@ def prepareAttributes():
 
         ATTRIBUTES["subreddit"] = GLOBAL.arguments.subreddit
 
+    elif GLOBAL.arguments.multireddit is not None:
+        ATTRIBUTES["multireddit"] = GLOBAL.arguments.multireddit
+
     elif GLOBAL.arguments.saved is True:
         ATTRIBUTES["saved"] = True
 
@@ -443,6 +482,10 @@ def download(submissions):
     downloadedCount = subsLenght
     duplicates = 0
     BACKUP = {}
+    if GLOBAL.arguments.exclude is not None:
+        ToBeDownloaded = GLOBAL.arguments.exclude
+    else:
+        ToBeDownloaded = []
 
     FAILED_FILE = createLogFile("FAILED")
 
@@ -465,7 +508,7 @@ def download(submissions):
 
         directory = GLOBAL.directory / submissions[i]['postSubreddit']
 
-        if submissions[i]['postType'] == 'imgur':
+        if submissions[i]['postType'] == 'imgur' and not 'imgur' in ToBeDownloaded:
             print("IMGUR",end="")
 
             while int(time.time() - lastRequestTime) <= 2:
@@ -528,7 +571,7 @@ def download(submissions):
                 )
                 downloadedCount -= 1
 
-        elif submissions[i]['postType'] == 'gfycat':
+        elif submissions[i]['postType'] == 'gfycat' and not 'gfycat' in ToBeDownloaded:
             print("GFYCAT")
             try:
                 Gfycat(directory,submissions[i])
@@ -539,7 +582,7 @@ def download(submissions):
                 downloadedCount -= 1
 
             except NotADownloadableLinkError as exception:
-                print("Could not read the page source")
+                print(exception)
                 FAILED_FILE.add({int(i+1):[str(exception),submissions[i]]})
                 downloadedCount -= 1
 
@@ -548,7 +591,7 @@ def download(submissions):
                 FAILED_FILE.add({int(i+1):[str(exception),submissions[i]]})
                 downloadedCount -= 1
 
-        elif submissions[i]['postType'] == 'direct':
+        elif submissions[i]['postType'] == 'direct' and not 'direct' in ToBeDownloaded:
             print("DIRECT")
             try:
                 Direct(directory,submissions[i])
@@ -563,7 +606,7 @@ def download(submissions):
                 FAILED_FILE.add({int(i+1):[str(exception),submissions[i]]})
                 downloadedCount -= 1
 
-        elif submissions[i]['postType'] == 'self':
+        elif submissions[i]['postType'] == 'self' and not 'self' in ToBeDownloaded:
             print("SELF")
             try:
                 Self(directory,submissions[i])
@@ -667,7 +710,10 @@ if __name__ == "__main__":
             GLOBAL.directory = Path(".\\")
         print("\nQUITTING...")
     except Exception as exception:
-        logging.error("Runtime error!", exc_info=full_exc_info(sys.exc_info()))
         if GLOBAL.directory is None:
             GLOBAL.directory = Path(".\\")
+        logging.error(sys.exc_info()[0].__name__,
+                      exc_info=full_exc_info(sys.exc_info()))
+        print(log_stream.getvalue())
 
         input("Press enter to quit\n")
```
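The `--exclude` flag added to `parseArguments` above is standard argparse; a standalone sketch of just that option (not the project's full parser) shows how `nargs="+"` and `choices` behave together:

```python
# Standalone sketch of the --exclude option added above, outside the project.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--exclude",
                    nargs="+",      # one or more values after the flag
                    help="Do not download specified links",
                    choices=["imgur", "gfycat", "direct", "self"],
                    type=str)

print(parser.parse_args(["--exclude", "imgur", "self"]).exclude)  # ['imgur', 'self']
print(parser.parse_args([]).exclude)                              # None
```

The `None` default when the flag is omitted is why `download()` above guards with `if GLOBAL.arguments.exclude is not None` before building its skip list.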
## Downloader changes (getExtension, Imgur, Gfycat)

```diff
@@ -36,7 +36,10 @@ def getExtension(link):
     if TYPE in parsed:
         return "."+parsed[-1]
     else:
-        return '.jpg'
+        if not "v.redd.it" in link:
+            return '.jpg'
+        else:
+            return '.mp4'
 
 def getFile(fileDir,tempDir,imageURL,indent=0):
     """Downloads given file to given directory.
@@ -169,7 +172,9 @@ class Imgur:
         if duplicates == imagesLenght:
             raise FileAlreadyExistsError
         elif howManyDownloaded < imagesLenght:
-            raise AlbumNotDownloadedCompletely
+            raise AlbumNotDownloadedCompletely(
+                "Album Not Downloaded Completely"
+            )
 
     @staticmethod
     def initImgur():
@@ -217,9 +222,9 @@ class Gfycat:
         try:
             POST['mediaURL'] = self.getLink(POST['postURL'])
         except IndexError:
-            raise NotADownloadableLinkError
+            raise NotADownloadableLinkError("Could not read the page source")
         except Exception as exception:
-            raise NotADownloadableLinkError
+            raise NotADownloadableLinkError("Could not read the page source")
 
         POST['postExt'] = getExtension(POST['mediaURL'])
 
@@ -248,8 +253,7 @@ class Gfycat:
         if url[-1:] == '/':
             url = url[:-1]
 
-        if 'gifs' in url:
-            url = "https://gfycat.com/" + url.split('/')[-1]
+        url = "https://gfycat.com/" + url.split('/')[-1]
 
         pageSource = (urllib.request.urlopen(url).read().decode().split('\n'))
 
@@ -266,7 +270,7 @@ class Gfycat:
                 break
 
         if "".join(link) == "":
-            raise NotADownloadableLinkError
+            raise NotADownloadableLinkError("Could not read the page source")
 
         return "".join(link)
```
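The first hunk rebuilds cleanly into a complete function; a sketch with an assumed `imageTypes` list (everything else is taken from the hunk):

```python
# Sketch of getExtension() after this change; the imageTypes list is assumed.
def getExtension(link):
    imageTypes = ["jpg", "png", "mp4", "webm", "gif"]  # assumed
    parsed = link.split(".")
    for TYPE in imageTypes:
        if TYPE in parsed:
            return "." + parsed[-1]
    else:
        # The else of a for loop runs only when no known type was found:
        # v.redd.it streams are MP4, everything else keeps the .jpg fallback.
        if not "v.redd.it" in link:
            return ".jpg"
        else:
            return ".mp4"

print(getExtension("https://i.imgur.com/example.png"))  # .png
print(getExtension("https://v.redd.it/example"))        # .mp4
```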
## Post matching changes (checkIfMatching, isDirectLink)

```diff
@@ -397,8 +397,9 @@ def checkIfMatching(submission):
         imgurCount += 1
         return details
 
-    elif isDirectLink(submission.url):
+    elif isDirectLink(submission.url) is not False:
         details['postType'] = 'direct'
+        details['postURL'] = isDirectLink(submission.url)
         directCount += 1
         return details
 
@@ -435,7 +436,7 @@ def printSubmission(SUB,validNumber,totalNumber):
 
 def isDirectLink(URL):
     """Check if link is a direct image link.
-    If so, return True,
+    If so, return URL,
     if not, return False
     """
 
@@ -444,10 +445,13 @@ def isDirectLink(URL):
         URL = URL[:-1]
 
     if "i.reddituploads.com" in URL:
-        return True
+        return URL
 
+    elif "v.redd.it" in URL:
+        return URL+"/DASH_600_K"
+
     for extension in imageTypes:
         if extension in URL:
-            return True
+            return URL
     else:
         return False
```
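The two `isDirectLink` hunks together change the function's contract from returning `True`/`False` to returning the downloadable URL or `False`, which is what lets `checkIfMatching` store `details['postURL'] = isDirectLink(submission.url)` for v.redd.it links whose media URL differs from the submission URL. Reassembled as a sketch, with an assumed `imageTypes` list:

```python
# Sketch of isDirectLink() after this change; imageTypes is an assumption.
def isDirectLink(URL):
    """Check if link is a direct image link.
    If so, return URL,
    if not, return False
    """
    imageTypes = [".jpg", ".png", ".mp4", ".webm", ".gif"]  # assumed
    if URL[-1] == "/":
        URL = URL[:-1]

    if "i.reddituploads.com" in URL:
        return URL
    elif "v.redd.it" in URL:
        return URL + "/DASH_600_K"

    for extension in imageTypes:
        if extension in URL:
            return URL
    else:
        return False

print(isDirectLink("https://v.redd.it/abc123"))  # https://v.redd.it/abc123/DASH_600_K
print(isDirectLink("https://example.com/page"))  # False
```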
## Log file changes (createLogFile, printToFile)

```diff
@@ -75,8 +75,10 @@ def createLogFile(TITLE):
     put given arguments inside \"HEADER\" key
     """
 
-    folderDirectory = GLOBAL.directory / str(time.strftime("%d-%m-%Y_%H-%M-%S",
-                                             time.localtime(GLOBAL.RUN_TIME)))
+    folderDirectory = GLOBAL.directory / "LOG_FILES" / \
+                      str(time.strftime(
+                          "%d-%m-%Y_%H-%M-%S",time.localtime(GLOBAL.RUN_TIME)
+                      ))
     logFilename = TITLE.upper()+'.json'
 
     if not path.exists(folderDirectory):
@@ -95,7 +97,7 @@ def printToFile(*args, **kwargs):
 
     TIME = str(time.strftime("%d-%m-%Y_%H-%M-%S",
                              time.localtime(GLOBAL.RUN_TIME)))
-    folderDirectory = GLOBAL.directory / TIME
+    folderDirectory = GLOBAL.directory / "LOG_FILES" / TIME
     print(*args,**kwargs)
 
     if not path.exists(folderDirectory):
```
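Both hunks point log output at a `LOG_FILES` subfolder, matching the 22/07/2018 changelog entry in the README. A runnable sketch of the new path construction, with a stand-in `GLOBAL` holder (`directory` assumed to be a `pathlib.Path`, `RUN_TIME` a Unix timestamp):

```python
# Sketch of the LOG_FILES path built above; GLOBAL is a stand-in holder.
import time
from pathlib import Path

class GLOBAL:
    directory = Path(".")   # assumed to be a pathlib.Path in the project
    RUN_TIME = time.time()  # assumed to be a Unix timestamp set once per run

folderDirectory = GLOBAL.directory / "LOG_FILES" / \
                  str(time.strftime(
                      "%d-%m-%Y_%H-%M-%S", time.localtime(GLOBAL.RUN_TIME)
                  ))
print(folderDirectory)  # e.g. LOG_FILES/22-07-2018_14-03-59
```

Because the timestamp comes from `GLOBAL.RUN_TIME` rather than the current time, every log file written during one run lands in the same per-run folder.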