Mirror of https://github.com/KevinMidboe/bulk-downloader-for-reddit.git (synced 2026-01-21 08:36:18 +00:00)

Compare commits: 36 commits

Commits in this comparison (SHA1):
49920cc457, c70e7c2ebb, 3931dfff54, 4a8c2377f9, 8a18a42a9a, 6c2d748fbc,
8c966df105, 2adf2c0451, 3e3a2df4d1, 7548a01019, 2ab16608d5, e15f33b97a,
27211f993c, 87d3b294f7, 8128378dcd, cc93aa3012, 50c4a8d6d7, 5737904a54,
f6eba6c5b0, 41cbb58db3, c569124406, 1a3836a8e1, fde6a1fac4, 6bba2c4dbb,
a078d44236, deae0be769, 3cf0203e6b, 0b31db0e2e, d3f2b1b08e, 0ec4bb3008,
0dbe2ed917, 9f831e1b78, 59012077e1, 5e3c79160b, 1e8eaa1a8d, 7dbc83fdce

.gitignore (vendored): 7 changes

@@ -1,4 +1,5 @@
+build/
+dist/
+MANIFEST
 __pycache__/
 src/__pycache__/
-logs/
-*.json

README.md: 36 changes

@@ -6,7 +6,7 @@ This program downloads imgur, gfycat and direct image and video links of saved p
 ## What it can do
 - Can get posts from: frontpage, subreddits, multireddits, redditor's submissions, upvoted and saved posts; search results or just plain reddit links
 - Sorts posts by hot, top, new and so on
-- Downloads imgur albums, gfycat links, [self posts](#i-cant-open-the-self-post-files) and any link to a direct image
+- Downloads imgur albums, gfycat links, [self posts](#how-do-i-open-self-post-files) and any link to a direct image
 - Skips the existing ones
 - Puts post titles to file's name
 - Puts every post to its subreddit's folder

@@ -19,12 +19,12 @@ This program downloads imgur, gfycat and direct image and video links of saved p
 ## How it works
 
 - For **Windows** and **Linux** users, there are executable files to run easily without installing a third party program. But if you are a paranoid like me, you can **[compile it from source code](docs/COMPILE_FROM_SOURCE.md)**.
-- In Windows, double click on script.exe file
-- In Linux, extract files to a folder and open terminal inside it. Type **`./script`**
+- In Windows, double click on bulk-downloader-for-reddit file
+- In Linux, extract files to a folder and open terminal inside it. Type **`./bulk-downloader-for-reddit`**
 
 - **MacOS** users have to **[compile it from source code](docs/COMPILE_FROM_SOURCE.md)**.
 
-Script also accepts **command-line arguments**, get further information from **[`python script.py --help`](docs/COMMAND_LINE_ARGUMENTS.md)**
+Script also accepts **command-line arguments**, get further information from **[`--help`](docs/COMMAND_LINE_ARGUMENTS.md)**
 
 ## Setting up the script
 Because this is not a commercial app, you need to create an imgur developer app in order API to work.

@@ -42,14 +42,36 @@ It should redirect to a page which shows your **imgur_client_id** and **imgur_cl
 \* Select **OAuth 2 authorization without a callback URL** first then select **Anonymous usage without user authorization** if it says *Authorization callback URL: required*
 
 ## FAQ
-### I can't open the self post files.
+### How do I open self post files?
 - Self posts are held at reddit as styled with markdown. So, the script downloads them as they are in order not to lose their stylings.
 However, there is a [great Chrome extension](https://chrome.google.com/webstore/detail/markdown-viewer/ckkdlimhmcjmikdlpkmbgfkaikojcbjk) for viewing Markdown files with its styling. Install it and open the files with [Chrome](https://www.google.com/intl/tr/chrome/).
 
+However, they are basically text files. You can also view them with any text editor such as Notepad on Windows, gedit on Linux or Text Editor on MacOS
+
+### How can I change my credentials?
+- All of the user data is held in **config.json** file which is in a folder named "Bulk Downloader for Reddit" in your **Home** directory. You can edit
+them, there.
+
 ## Changelog
-### [12/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/dd671fd7380d6b9bc7610df75e82b8a21c6eb4e9)
+### [21/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/4a8c2377f9fb4d60ed7eeb8d50aaf9a26492462a)
+- Added exclude mode
+
+### [20/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/commit/7548a010198fb693841ca03654d2c9bdf5742139)
+- "0" input for no limit
+- Fixed the bug that recognizes none image direct links as image links
+
+### [19/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/41cbb58db34f500a8a5ecc3ac4375bf6c3b275bb)
+- Added v.redd.it support
+- Added custom exception descriptions to FAILED.json file
+- Fixed the bug that prevents downloading some gfycat URLs
+
+### [13/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/9f831e1b784a770c82252e909462871401a05c11)
+- Change config.json file's path to home directory
+
+### [12/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/50a77f6ba54c24f5647d5ea4e177400b71ff04a7)
 - Added binaries for Windows and Linux
 - Wait on KeyboardInterrupt
+- Accept multiple subreddit input
 - Fixed the bug that prevents choosing "[0] exit" with typing "exit"
 
 ### [11/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/a28a7776ab826dea2a8d93873a94cd46db3a339b)

Deleted file (Jekyll site configuration):

@@ -1,5 +0,0 @@
-theme: jekyll-theme-minimal
-show_downloads: false
-#title: Bulk Downloader for Reddit
-description: Code written by Ali PARLAKCI
-google_analytics: UA-80780721-3

@@ -2,7 +2,7 @@
 
 See **[compiling from source](COMPILE_FROM_SOURCE.md)** page first unless you are using an executable file. If you are using an executable file, see [using terminal](COMPILE_FROM_SOURCE.md#using-terminal) and come back.
 
-***Use*** `.\script.exe` ***or*** `./script` ***if you are using the executable***.
+***Use*** `.\bulk-downloader-for-reddit.exe` ***or*** `./bulk-downloader-for-reddit` ***if you are using the executable***.
 ```console
 $ python script.py --help
 usage: script.py [-h] [--directory DIRECTORY] [--link link] [--saved]

@@ -23,7 +23,8 @@ optional arguments:
   --saved               Triggers saved mode
   --submitted           Gets posts of --user
   --upvoted             Gets upvoted posts of --user
-  --log LOG FILE        Triggers log read mode and takes a log file
+  --log LOG FILE        Takes a log file which created by itself (json files),
+                        reads posts and tries downloading them again.
   --subreddit SUBREDDIT [SUBREDDIT ...]
                         Triggers subreddit mode and takes subreddit's name
                         without r/. use "frontpage" for frontpage

@@ -39,6 +40,8 @@ optional arguments:
                         all
   --NoDownload          Just gets the posts and store them in a file for
                         downloading later
+  --exclude {imgur,gfycat,direct,self} [{imgur,gfycat,direct,self} ...]
+                        Do not download specified links
 ```
 
 # Examples
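
The new `--exclude` option accepts one or more site names and tells the script to skip those post types. Below is a quick illustrative sketch of how the flag parses; it is not part of the diff. It only reuses the `add_argument("--exclude", ...)` call added in the script.py hunk further down; the bare `ArgumentParser` and the sample argument lists are made up for demonstration.

```python
# Minimal sketch of the --exclude option, mirroring the add_argument() call
# added to parseArguments() in script.py. Parser setup and argv values here
# are illustrative only.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--exclude",
                    nargs="+",
                    help="Do not download specified links",
                    choices=["imgur", "gfycat", "direct", "self"],
                    type=str)

# e.g. `--exclude self direct` yields a list of the excluded post types:
args = parser.parse_args(["--exclude", "self", "direct"])
print(args.exclude)                   # ['self', 'direct']

# Omitting the flag leaves the attribute as None, which download() later
# treats as "exclude nothing".
print(parser.parse_args([]).exclude)  # None
```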

@@ -50,7 +53,7 @@ python script.py
 ```
 
 ```console
-.\script.exe
+.\bulk-downloader-for-reddit.exe
 ```
 
 ```console

@@ -58,11 +61,11 @@
 ```
 
 ```console
-.\script.exe -- directory .\\NEW_FOLDER --search cats --sort new --time all --subreddit gifs pics --NoDownload
+.\bulk-downloader-for-reddit.exe -- directory .\\NEW_FOLDER --search cats --sort new --time all --subreddit gifs pics --NoDownload
 ```
 
 ```console
-./script --directory .\\NEW_FOLDER\\ANOTHER_FOLDER --saved --limit 1000
+./bulk-downloader-for-reddit --directory .\\NEW_FOLDER\\ANOTHER_FOLDER --saved --limit 1000
 ```
 
 ```console

@@ -15,7 +15,7 @@ Latest* version of **Python 3** is needed. See if it is already installed [here]
 - **On MacOS**: Look for an app called **Terminal**.
 
 ### Navigating to the directory where script is downloaded
-Go inside the folder where script.py is located. If you are not familier with changing directories on command-prompt and terminal read *Changing Directories* in [this article](https://lifehacker.com/5633909/who-needs-a-mouse-learn-to-use-the-command-line-for-almost-anything)
+Go inside the folder where script.py is located. If you are not familiar with changing directories on command-prompt and terminal read *Changing Directories* in [this article](https://lifehacker.com/5633909/who-needs-a-mouse-learn-to-use-the-command-line-for-almost-anything)
 
 ## Finding the correct keyword for Python
 Enter these lines to terminal window until it prints out the version you have downloaded and installed:

script.py: 63 changes

@@ -22,7 +22,7 @@ from src.tools import (GLOBAL, createLogFile, jsonFile, nameCorrector,
 
 __author__ = "Ali Parlakci"
 __license__ = "GPL"
-__version__ = "1.1.1"
+__version__ = "1.3.0"
 __maintainer__ = "Ali Parlakci"
 __email__ = "parlakciali@gmail.com"
 

@@ -143,6 +143,12 @@ def parseArguments(arguments=[]):
                              " for downloading later",
                         action="store_true",
                         default=False)
 
+    parser.add_argument("--exclude",
+                        nargs="+",
+                        help="Do not download specified links",
+                        choices=["imgur","gfycat","direct","self"],
+                        type=str)
+
     if arguments == []:
         return parser.parse_args()

@@ -246,7 +252,6 @@ class PromptUser:
             # DELETE THE PLUS (+) AT THE END
             GLOBAL.arguments.subreddit = GLOBAL.arguments.subreddit[:-1]
 
-            print(GLOBAL.arguments.subreddit)
         print("\nselect sort type:")
         sortTypes = [
             "hot","top","new","rising","controversial"

@@ -319,9 +324,37 @@ class PromptUser:
             if Path(GLOBAL.arguments.log ).is_file():
                 break
 
+        GLOBAL.arguments.exclude = []
+
+        sites = ["imgur","gfycat","direct","self"]
+
+        excludeInput = input("exclude: ").lower()
+        if excludeInput in sites and excludeInput != "":
+            GLOBAL.arguments.exclude = [excludeInput]
+
+        while not excludeInput == "":
+            while True:
+                excludeInput = input("exclude: ").lower()
+                if not excludeInput in sites or excludeInput in GLOBAL.arguments.exclude:
+                    break
+                elif excludeInput == "":
+                    break
+                else:
+                    GLOBAL.arguments.exclude.append(excludeInput)
+
+        for i in range(len(GLOBAL.arguments.exclude)):
+            if " " in GLOBAL.arguments.exclude[i]:
+                inputWithWhitespace = GLOBAL.arguments.exclude[i]
+                del GLOBAL.arguments.exclude[i]
+                for siteInput in inputWithWhitespace.split():
+                    if siteInput in sites and siteInput not in GLOBAL.arguments.exclude:
+                        GLOBAL.arguments.exclude.append(siteInput)
+
         while True:
             try:
-                GLOBAL.arguments.limit = int(input("\nlimit: "))
+                GLOBAL.arguments.limit = int(input("\nlimit (0 for none): "))
+                if GLOBAL.arguments.limit == 0:
+                    GLOBAL.arguments.limit = None
                 break
             except ValueError:
                 pass

@@ -443,6 +476,10 @@ def download(submissions):
     downloadedCount = subsLenght
     duplicates = 0
     BACKUP = {}
+    if GLOBAL.arguments.exclude is not None:
+        ToBeDownloaded = GLOBAL.arguments.exclude
+    else:
+        ToBeDownloaded = []
 
     FAILED_FILE = createLogFile("FAILED")
 
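
Note that, despite its name, `ToBeDownloaded` ends up holding the post types to skip: the hunks below add `and not '<type>' in ToBeDownloaded` to each handler branch. A minimal sketch of that check, not part of the diff; the helper name and the sample submission dict are invented for illustration:

```python
# Sketch of the exclude check applied in download(): a post is handled only
# when its type is not in the exclude list. The submission dict below is a
# made-up example shaped like the ones the script builds.
def should_handle(submission, exclude):
    """Return True when this post type is not excluded."""
    to_be_skipped = exclude if exclude is not None else []
    return submission['postType'] not in to_be_skipped

submission = {'postType': 'gfycat', 'postSubreddit': 'gifs'}
print(should_handle(submission, ['gfycat', 'self']))  # False, gets skipped
print(should_handle(submission, None))                # True, gets downloaded
```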

@@ -465,7 +502,7 @@ def download(submissions):
 
         directory = GLOBAL.directory / submissions[i]['postSubreddit']
 
-        if submissions[i]['postType'] == 'imgur':
+        if submissions[i]['postType'] == 'imgur' and not 'imgur' in ToBeDownloaded:
             print("IMGUR",end="")
 
             while int(time.time() - lastRequestTime) <= 2:

@@ -528,7 +565,7 @@ def download(submissions):
             )
             downloadedCount -= 1
 
-        elif submissions[i]['postType'] == 'gfycat':
+        elif submissions[i]['postType'] == 'gfycat' and not 'gfycat' in ToBeDownloaded:
             print("GFYCAT")
             try:
                 Gfycat(directory,submissions[i])

@@ -539,7 +576,7 @@ def download(submissions):
                 downloadedCount -= 1
 
             except NotADownloadableLinkError as exception:
-                print("Could not read the page source")
+                print(exception)
                 FAILED_FILE.add({int(i+1):[str(exception),submissions[i]]})
                 downloadedCount -= 1
 

@@ -548,7 +585,7 @@ def download(submissions):
                 FAILED_FILE.add({int(i+1):[str(exception),submissions[i]]})
                 downloadedCount -= 1
 
-        elif submissions[i]['postType'] == 'direct':
+        elif submissions[i]['postType'] == 'direct' and not 'direct' in ToBeDownloaded:
             print("DIRECT")
             try:
                 Direct(directory,submissions[i])

@@ -563,7 +600,7 @@ def download(submissions):
                 FAILED_FILE.add({int(i+1):[str(exception),submissions[i]]})
                 downloadedCount -= 1
 
-        elif submissions[i]['postType'] == 'self':
+        elif submissions[i]['postType'] == 'self' and not 'self' in ToBeDownloaded:
             print("SELF")
             try:
                 Self(directory,submissions[i])

@@ -609,8 +646,9 @@ def main():
         print(err)
         sys.exit()
 
-    GLOBAL.config = getConfig("config.json")
+    if not Path(GLOBAL.configDirectory).is_dir():
+        os.makedirs(GLOBAL.configDirectory)
+    GLOBAL.config = getConfig(GLOBAL.configDirectory / "config.json")
 
     if GLOBAL.arguments.log is not None:
         logDir = Path(GLOBAL.arguments.log)

@@ -666,7 +704,10 @@ if __name__ == "__main__":
         GLOBAL.directory = Path(".\\")
         print("\nQUITTING...")
     except Exception as exception:
-        logging.error("Runtime error!", exc_info=full_exc_info(sys.exc_info()))
+        if GLOBAL.directory is None:
+            GLOBAL.directory = Path(".\\")
+        logging.error(sys.exc_info()[0].__name__,
+                      exc_info=full_exc_info(sys.exc_info()))
         print(log_stream.getvalue())
 
         input("Press enter to quit\n")

setup.py: new file, 50 lines

@@ -0,0 +1,50 @@
+#!C:\Users\Ali\AppData\Local\Programs\Python\Python36\python.exe
+
+## python setup.py build
+import sys
+from cx_Freeze import setup, Executable
+from script import __version__
+
+options = {
+    "build_exe": {
+        "packages":[
+            "idna","imgurpython", "praw", "requests"
+        ]
+    }
+}
+
+if sys.platform == "win32":
+    executables = [Executable(
+        "script.py",
+        targetName="bulk-downloader-for-reddit.exe",
+        shortcutName="Bulk Downloader for Reddit",
+        shortcutDir="DesktopFolder"
+    )]
+
+elif sys.platform == "linux":
+    executables = [Executable(
+        "script.py",
+        targetName="bulk-downloader-for-reddit",
+        shortcutName="Bulk Downloader for Reddit",
+        shortcutDir="DesktopFolder"
+    )]
+
+setup(
+    name = "Bulk Downloader for Reddit",
+    version = __version__,
+    description = "Bulk Downloader for Reddit",
+    author = "Ali Parlakci",
+    author_email="parlakciali@gmail.com",
+    url="https://github.com/aliparlakci/bulk-downloader-for-reddit",
+    classifiers=(
+        "Programming Language :: Python :: 3",
+        "License :: OSI Approved :: GNU General Public License v3 (GPLv3)"
+        "Natural Language :: English",
+        "Environment :: Console",
+        "Operating System :: OS Independent",
+    ),
+    executables = executables,
+    options = options
+)

@@ -36,7 +36,10 @@ def getExtension(link):
     if TYPE in parsed:
         return "."+parsed[-1]
     else:
-        return '.jpg'
+        if not "v.redd.it" in link:
+            return '.jpg'
+        else:
+            return '.mp4'
 
 def getFile(fileDir,tempDir,imageURL,indent=0):
     """Downloads given file to given directory.
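
In other words, when a URL carries no recognizable image extension, v.redd.it links now fall back to `.mp4` instead of `.jpg`. A tiny illustrative sketch of just that fallback, not part of the diff; the function name and sample URLs are made up:

```python
# Sketch of the fallback added to getExtension(): v.redd.it links without a
# known extension are treated as MP4 video, everything else keeps the old
# .jpg default.
def fallback_extension(link):
    if "v.redd.it" not in link:
        return '.jpg'
    return '.mp4'

print(fallback_extension("https://i.reddituploads.com/abc123"))   # .jpg
print(fallback_extension("https://v.redd.it/abc123/DASH_600_K"))  # .mp4
```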

@@ -169,7 +172,9 @@ class Imgur:
         if duplicates == imagesLenght:
             raise FileAlreadyExistsError
         elif howManyDownloaded < imagesLenght:
-            raise AlbumNotDownloadedCompletely
+            raise AlbumNotDownloadedCompletely(
+                "Album Not Downloaded Completely"
+            )
 
     @staticmethod
     def initImgur():

@@ -217,9 +222,9 @@ class Gfycat:
         try:
             POST['mediaURL'] = self.getLink(POST['postURL'])
         except IndexError:
-            raise NotADownloadableLinkError
+            raise NotADownloadableLinkError("Could not read the page source")
         except Exception as exception:
-            raise NotADownloadableLinkError
+            raise NotADownloadableLinkError("Could not read the page source")
 
         POST['postExt'] = getExtension(POST['mediaURL'])
 

@@ -248,8 +253,7 @@ class Gfycat:
         if url[-1:] == '/':
             url = url[:-1]
 
-        if 'gifs' in url:
-            url = "https://gfycat.com/" + url.split('/')[-1]
+        url = "https://gfycat.com/" + url.split('/')[-1]
 
         pageSource = (urllib.request.urlopen(url).read().decode().split('\n'))
 

@@ -266,7 +270,7 @@ class Gfycat:
                 break
 
         if "".join(link) == "":
-            raise NotADownloadableLinkError
+            raise NotADownloadableLinkError("Could not read the page source")
 
         return "".join(link)
 

@@ -89,7 +89,7 @@ def beginPraw(config,user_agent = str(socket.gethostname())):
         authorizedInstance = GetAuth(reddit,port).getRefreshToken(*scopes)
         reddit = authorizedInstance[0]
         refresh_token = authorizedInstance[1]
-        jsonFile("config.json").add({
+        jsonFile(GLOBAL.configDirectory / "config.json").add({
             "reddit_refresh_token":refresh_token
         })
     else:

@@ -98,7 +98,7 @@ def beginPraw(config,user_agent = str(socket.gethostname())):
         authorizedInstance = GetAuth(reddit,port).getRefreshToken(*scopes)
         reddit = authorizedInstance[0]
         refresh_token = authorizedInstance[1]
-        jsonFile("config.json").add({
+        jsonFile(GLOBAL.configDirectory / "config.json").add({
             "reddit_refresh_token":refresh_token
         })
     return reddit

@@ -397,8 +397,9 @@ def checkIfMatching(submission):
         imgurCount += 1
         return details
 
-    elif isDirectLink(submission.url):
+    elif isDirectLink(submission.url) is not False:
         details['postType'] = 'direct'
+        details['postURL'] = isDirectLink(submission.url)
         directCount += 1
         return details
 

@@ -435,7 +436,7 @@ def printSubmission(SUB,validNumber,totalNumber):
 
 def isDirectLink(URL):
     """Check if link is a direct image link.
-    If so, return True,
+    If so, return URL,
     if not, return False
     """
 

@@ -444,10 +445,13 @@ def isDirectLink(URL):
         URL = URL[:-1]
 
     if "i.reddituploads.com" in URL:
-        return True
+        return URL
+
+    elif "v.redd.it" in URL:
+        return URL+"/DASH_600_K"
 
     for extension in imageTypes:
         if extension in URL:
-            return True
+            return URL
         else:
            return False
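
With this change `isDirectLink()` returns a usable URL (rewriting v.redd.it links to a fixed DASH rendition) or `False`, which is why `checkIfMatching()` above now tests `is not False` and stores the returned value as `postURL`. A self-contained sketch of that contract, not part of the diff; the lower-case function name, the contents of `imageTypes`, and the sample URLs are assumptions for illustration:

```python
# Sketch of the new isDirectLink() contract: return a downloadable URL
# (possibly rewritten) when the link is direct, otherwise False.
imageTypes = ['jpg', 'png', 'gif', 'webm', 'mp4']  # assumed extension list

def is_direct_link(url):
    if url.endswith('/'):
        url = url[:-1]
    if "i.reddituploads.com" in url:
        return url
    elif "v.redd.it" in url:
        return url + "/DASH_600_K"   # pick a fixed DASH video rendition
    for extension in imageTypes:
        if extension in url:
            return url
    return False

print(is_direct_link("https://v.redd.it/abc123"))  # .../abc123/DASH_600_K
print(is_direct_link("https://example.com/post"))  # False
```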

@@ -14,6 +14,7 @@ class GLOBAL:
     config = None
     arguments = None
     directory = None
+    configDirectory = Path.home() / "Bulk Downloader for Reddit"
     reddit_client_id = "BSyphDdxYZAgVQ"
     reddit_client_secret = "bfqNJaRh8NMh-9eAr-t4TRz-Blk"
     printVanilla = print
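
Together with the `getConfig(GLOBAL.configDirectory / "config.json")` and `jsonFile(GLOBAL.configDirectory / "config.json")` changes above, this means all user data now lives under the home directory. A minimal sketch of where the file resolves, not part of the diff; the printed example paths are illustrative:

```python
# Sketch of the new configuration location: GLOBAL.configDirectory is built
# from the user's home directory, so config.json (including the stored reddit
# refresh token) ends up in the same place on every OS.
from pathlib import Path

configDirectory = Path.home() / "Bulk Downloader for Reddit"
configFile = configDirectory / "config.json"

# e.g. C:\Users\<name>\Bulk Downloader for Reddit\config.json on Windows,
#      /home/<name>/Bulk Downloader for Reddit/config.json on Linux.
print(configFile)
```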