22 Commits

Author SHA1 Message Date
Ali
f3d3d5fcd4 Removed parser file 2019-04-28 20:52:46 +03:00
Ali
3a07c38f08 Removed link mode 2019-04-28 20:50:28 +03:00
Ali Parlakçı
35e551f20c Update README.md 2019-04-23 14:04:15 +03:00
Ali Parlakçı
0f2bda9c34 Merge pull request #63 from aliparlakci/moreUsefulReadme
A more useful readme (credits to *stared*)
2019-04-23 14:00:53 +03:00
Ali Parlakçı
8ab694bcc1 Fixed typo 2019-04-23 13:59:01 +03:00
Ali
898f59d035 Added an FAQ entry 2019-04-23 13:51:21 +03:00
Ali
6b6db37185 Minor corrections 2019-04-23 13:29:58 +03:00
Piotr Migdał
d4a5100128 a clearer description how to run it (#62) 2019-04-23 13:17:15 +03:00
Ali
22047338e2 Update version number 2019-04-09 20:45:22 +03:00
Ali
b16cdd3cbb Hopefully, fixed the config.json bug 2019-04-09 20:31:42 +03:00
Ali
2a8394a48c Fixed the bug concerning config.json 2019-04-08 22:09:52 +03:00
Ali Parlakçı
eac4404bbf Update README.md 2019-03-31 11:59:49 +03:00
Ali Parlakci
fae49d50da Update version 2019-03-31 11:46:03 +03:00
Ali Parlakci
7130525ece Update version 2019-03-31 11:35:27 +03:00
Ali Parlakci
2bf1e03ee1 Update version 2019-03-31 11:33:29 +03:00
Ali
15a91e5784 Fixed saving auth info problem 2019-02-24 12:28:40 +03:00
Ali
344201a70d Fixed v.redd.it links 2019-02-23 00:01:39 +03:00
Ali
92e47adb43 Update version 2019-02-22 23:59:57 +03:00
Ali
4d385fda60 Fixed v.redd.it links 2019-02-22 23:59:03 +03:00
Ali Parlakci
82dcd2f63d Bug fix 2019-01-27 17:05:31 +03:00
Ali Parlakci
08de21a364 Updated Python3 version 2019-01-27 16:32:43 +03:00
Ali Parlakci
af7d3d9151 Moved FAQ 2019-01-27 16:32:00 +03:00
9 changed files with 186 additions and 321 deletions

README.md

@@ -1,9 +1,11 @@
# Bulk Downloader for Reddit
Downloads media from reddit posts.
## [Download the latest release](https://github.com/aliparlakci/bulk-downloader-for-reddit/releases/latest)
Downloads media from reddit posts. Made by [u/aliparlakci](https://reddit.com/u/aliparlakci)
## [Download the latest release here](https://github.com/aliparlakci/bulk-downloader-for-reddit/releases/latest)
## What it can do
- Can get posts from: frontpage, subreddits, multireddits, redditor's submissions, upvoted and saved posts; search results or just plain reddit links
- Sorts posts by hot, top, new and so on
- Downloads **REDDIT** images and videos, **IMGUR** images and albums, **GFYCAT** links, **EROME** images and albums, **SELF POSTS** and any link to a **DIRECT IMAGE**
@@ -13,17 +15,134 @@ Downloads media from reddit posts.
- Saves a reusable copy of posts' details that are found so that they can be re-downloaded again
- Logs failed ones in a file so that you can try to download them later
## **[Compiling it from source code](docs/COMPILE_FROM_SOURCE.md)**
*\* MacOS users have to use this option.*
## Installation
## Additional options
The script also accepts additional options via command-line arguments. Get further information from **[`--help`](docs/COMMAND_LINE_ARGUMENTS.md)**
You can use it either as a `bulk-downloader-for-reddit.exe` executable file for Windows, as a Linux binary, or as a *[Python script](#python-script)*. There is no MacOS executable; MacOS users must use the Python script option.
### Executables
For Windows and Linux, [download the latest executables here](https://github.com/aliparlakci/bulk-downloader-for-reddit/releases/latest).
### Python script
* Download this repository ([latest zip](https://github.com/aliparlakci/bulk-downloader-for-reddit/archive/master.zip) or `git clone git@github.com:aliparlakci/bulk-downloader-for-reddit.git`).
* Enter its folder.
* Run `python ./script.py` from the command line (Windows, MacOS or Linux; it may also work in an Anaconda prompt). See [here](docs/INTERPRET_FROM_SOURCE.md#finding-the-correct-keyword-for-python) if you have any trouble with this step.
It requires Python 3.6 or above; it won't work with Python 3.5 or any Python 2.x. If you have trouble setting it up, see [here](docs/INTERPRET_FROM_SOURCE.md).
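The Python version requirement above can also be checked programmatically. This is a minimal sketch (the helper name is illustrative, not part of script.py) of the kind of guard you could run before starting:

```python
import sys

def meets_minimum_python(minimum=(3, 6)):
    """Return True if the running interpreter is at least `minimum`.

    Illustrative helper: the project states it needs Python 3.6+
    and will not run on 3.5 or any 2.x.
    """
    return sys.version_info[:2] >= minimum

if not meets_minimum_python():
    sys.exit("Python 3.6 or newer is required.")
```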
### Setting up the script
## Setting up the script
You need to create an imgur developer app for the API to work. Go to https://api.imgur.com/oauth2/addclient and fill in the form (it does not really matter how you fill it in).
It should redirect you to a page that shows your **imgur_client_id** and **imgur_client_secret**.
## [FAQ](docs/FAQ.md)
## [Changes on *master*](docs/CHANGELOG.md)
When you run it for the first time, it will automatically create `config.json` file containing `imgur_client_id`, `imgur_client_secret`, `reddit_username` and `reddit_refresh_token`.
## Running
You can run it in interactive mode or with [command-line arguments](docs/COMMAND_LINE_ARGUMENTS.md) (also available via `python ./script.py --help` or `bulk-downloader-for-reddit.exe --help`).
To run the interactive mode, simply use `python ./script.py` or double-click `bulk-downloader-for-reddit.exe` without any extra arguments.
### [Example for command line arguments](docs/COMMAND_LINE_ARGUMENTS.md#examples)
### Example for an interactive script
```
(py37) bulk-downloader-for-reddit user$ python ./script.py
Bulk Downloader for Reddit v1.6.5
Written by Ali PARLAKCI parlakciali@gmail.com
https://github.com/aliparlakci/bulk-downloader-for-reddit/
download directory: downloads/dataisbeautiful_last_few
select program mode:
[1] search
[2] subreddit
[3] multireddit
[4] submitted
[5] upvoted
[6] saved
[7] log
[0] exit
> 2
(type frontpage for all subscribed subreddits,
use plus to seperate multi subreddits: pics+funny+me_irl etc.)
subreddit: dataisbeautiful
select sort type:
[1] hot
[2] top
[3] new
[4] rising
[5] controversial
[0] exit
> 1
limit (0 for none): 50
GETTING POSTS
(1/24) r/dataisbeautiful
AutoModerator_[Battle]_DataViz_Battle_for_the_month_of_April_2019__Visualize_the_April_Fool's_Prank_for_2019-04-01_on__r_DataIsBeautiful_b8ws37.md
Downloaded
(2/24) r/dataisbeautiful
AutoModerator_[Topic][Open]_Open_Discussion_Monday_—_Anybody_can_post_a_general_visualization_question_or_start_a_fresh_discussion!_bg1wej.md
Downloaded
...
Total of 24 links downloaded!
Press enter to quit
```
## FAQ
### I am running the script on a headless machine or a remote server. How can I authenticate my reddit account?
- Download the script on your everyday computer and run it once.
- Authenticate the program on both reddit and imgur.
- Go to your Home folder (for Windows users it is `C:\Users\[USERNAME]\`, for Linux users it is `/home/[USERNAME]`)
- Copy the *config.json* file inside the Bulk Downloader for Reddit folder and paste it **next to** the file from which you run the program.
### How can I change my credentials?
- All user data is held in the **config.json** file, which lives in a folder named "Bulk Downloader for Reddit" in your **Home** directory. You can edit it there.
Also, if you already have a config.json file, you can paste it **next to** the script to override the one in your Home directory.
### What do the dots represent when getting posts?
- Each dot means 100 posts have been scanned.
### Getting posts takes too long.
- You can press *Ctrl+C* to interrupt it and start downloading.
### How are the filenames formatted?
- **Self posts**, **images** that do not belong to an album, and **album folders** are formatted as:
`[SUBMITTER NAME]_[POST TITLE]_[REDDIT ID]`
You can use the *reddit id* to open the post's reddit page at reddit.com/[REDDIT ID]
- An **image in an album** is formatted as:
`[ITEM NUMBER]_[IMAGE TITLE]_[IMGUR ID]`
Similarly, you can use the *imgur id* to open the image's imgur page at imgur.com/[IMGUR ID].
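The two naming schemes can be sketched like this. The helper names and the space-to-underscore normalization are assumptions for illustration (the downloaded filenames shown earlier suggest spaces become underscores), not the script's exact code:

```python
def post_filename(submitter, title, reddit_id):
    """[SUBMITTER NAME]_[POST TITLE]_[REDDIT ID], with spaces underscored."""
    return f"{submitter}_{title}_{reddit_id}".replace(" ", "_")

def album_item_filename(item_number, image_title, imgur_id):
    """[ITEM NUMBER]_[IMAGE TITLE]_[IMGUR ID] for an image inside an album."""
    return f"{item_number}_{image_title}_{imgur_id}".replace(" ", "_")
```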
### How do I open self post files?
- Self posts are stored on reddit as Markdown, so the script downloads them as-is to preserve their styling.
However, there is a [great Chrome extension](https://chrome.google.com/webstore/detail/markdown-viewer/ckkdlimhmcjmikdlpkmbgfkaikojcbjk) for viewing Markdown files with their styling. Install it and open the files with [Chrome](https://www.google.com/intl/tr/chrome/).
They are also plain text files, so you can view them with any text editor such as Notepad on Windows, gedit on Linux or TextEdit on MacOS.
## Changelog
* [See the changes on *master* here](docs/CHANGELOG.md)


@@ -1,4 +1,7 @@
# Changes on *master*
## [23/02/2019](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/4d385fda60028343be816eb7c4f7bc613a9d555d)
- Fixed v.redd.it links
## [27/01/2019](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/b7baf07fb5998368d87e3c4c36aed40daf820609)
- Clarified the instructions
@@ -80,4 +83,4 @@
## [10/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/ffe3839aee6dc1a552d95154d817aefc2b66af81)
- Added support for *self* post
- Now getting posts is quicker
- Now getting posts is quicker


@@ -1,6 +1,6 @@
# Using command-line arguments
See **[compiling from source](COMPILE_FROM_SOURCE.md)** page first unless you are using an executable file. If you are using an executable file, see [using terminal](COMPILE_FROM_SOURCE.md#using-terminal) and come back.
See **[compiling from source](INTERPRET_FROM_SOURCE.md)** page first unless you are using an executable file. If you are using an executable file, see [using terminal](INTERPRET_FROM_SOURCE.md#using-terminal) and come back.
***Use*** `.\bulk-downloader-for-reddit.exe` ***or*** `./bulk-downloader-for-reddit` ***if you are using the executable***.
```console
@@ -98,4 +98,4 @@ python script.py --directory C:\\NEW_FOLDER\\ANOTHER_FOLDER --log UNNAMED_FOLDER
# FAQ
## I can't startup the script no matter what.
See **[finding the correct keyword for Python](COMPILE_FROM_SOURCE.md#finding-the-correct-keyword-for-python)**
See **[finding the correct keyword for Python](INTERPRET_FROM_SOURCE.md#finding-the-correct-keyword-for-python)**


@@ -1,23 +0,0 @@
# FAQ
## What do the dots resemble when getting posts?
- Each dot means that 100 posts are scanned.
## Getting posts is taking too long.
- You can press Ctrl+C to interrupt it and start downloading.
## How are filenames formatted?
- Self posts and images that are not belong to an album are formatted as **`[SUBMITTER NAME]_[POST TITLE]_[REDDIT ID]`**.
You can use *reddit id* to go to post's reddit page by going to link **reddit.com/[REDDIT ID]**
- An image in an imgur album is formatted as **`[ITEM NUMBER]_[IMAGE TITLE]_[IMGUR ID]`**
Similarly, you can use *imgur id* to go to image's imgur page by going to link **imgur.com/[IMGUR ID]**.
## How do I open self post files?
- Self posts are held at reddit as styled with markdown. So, the script downloads them as they are in order not to lose their stylings.
However, there is a [great Chrome extension](https://chrome.google.com/webstore/detail/markdown-viewer/ckkdlimhmcjmikdlpkmbgfkaikojcbjk) for viewing Markdown files with its styling. Install it and open the files with [Chrome](https://www.google.com/intl/tr/chrome/).
However, they are basically text files. You can also view them with any text editor such as Notepad on Windows, gedit on Linux or Text Editor on MacOS
## How can I change my credentials?
- All of the user data is held in **config.json** file which is in a folder named "Bulk Downloader for Reddit" in your **Home** directory. You can edit
them, there.


@@ -1,16 +1,16 @@
# Compiling from source code
# Interpret from source code
## Requirements
### Python 3 Interpreter
Latest* version of **Python 3** is needed. See if it is already installed [here](#finding-the-correct-keyword-for-python). If not, download the matching release for your platform [here](https://www.python.org/downloads/) and install it. If you are a *Windows* user, selecting **Add Python 3 to PATH** option when installing the software is mandatory.
\* *Use Python 3.6.5 if you encounter an issue*
- This program is designed to work best on **Python 3.6.5**, which is the suggested version. See if it is already installed [here](#finding-the-correct-keyword-for-python).
- If not, download the matching release for your platform [here](https://www.python.org/downloads/) and install it. If you are a *Windows* user, selecting **Add Python 3 to PATH** option when installing the software is mandatory.
## Using terminal
### To open it...
- **On Windows**: Press **Shift+Right Click**, select **Open Powershell window here** or **Open Command Prompt window here**
- **on Windows**: Press **Shift+Right Click**, select **Open Powershell window here** or **Open Command Prompt window here**
- **On Linux**: Right-click in a folder and select **Open Terminal** or press **Ctrl+Alt+T**.
- **on Linux**: Right-click in a folder and select **Open Terminal** or press **Ctrl+Alt+T**.
- **On MacOS**: Look for an app called **Terminal**.
- **on MacOS**: Look for an app called **Terminal**.
### Navigating to the directory where script is downloaded
Go inside the folder where script.py is located. If you are not familiar with changing directories in the command prompt or terminal, read *Changing Directories* in [this article](https://lifehacker.com/5633909/who-needs-a-mouse-learn-to-use-the-command-line-for-almost-anything)


@@ -16,14 +16,13 @@ from pathlib import Path, PurePath
from src.downloader import Direct, Erome, Gfycat, Imgur, Self
from src.errors import *
from src.parser import LinkDesigner
from src.searcher import getPosts
from src.tools import (GLOBAL, createLogFile, jsonFile, nameCorrector,
printToFile)
__author__ = "Ali Parlakci"
__license__ = "GPL"
__version__ = "1.6.4.1"
__version__ = "1.6.5"
__maintainer__ = "Ali Parlakci"
__email__ = "parlakciali@gmail.com"
@@ -98,10 +97,6 @@ def parseArguments(arguments=[]):
action="store_true",
default=False)
parser.add_argument("--link","-l",
help="Get posts from link",
metavar="link")
parser.add_argument("--saved",
action="store_true",
help="Triggers saved mode")
@@ -279,7 +274,8 @@ class PromptUser:
GLOBAL.arguments.subreddit = "+".join(GLOBAL.arguments.subreddit.split())
# DELETE THE PLUS (+) AT THE END
if not subredditInput.lower() == "frontpage":
if not subredditInput.lower() == "frontpage" \
and GLOBAL.arguments.subreddit[-1] == "+":
GLOBAL.arguments.subreddit = GLOBAL.arguments.subreddit[:-1]
print("\nselect sort type:")
@@ -388,21 +384,6 @@ def prepareAttributes():
else:
ATTRIBUTES["time"] = "all"
if GLOBAL.arguments.link is not None:
GLOBAL.arguments.link = GLOBAL.arguments.link.strip("\"")
ATTRIBUTES = LinkDesigner(GLOBAL.arguments.link)
if GLOBAL.arguments.search is not None:
ATTRIBUTES["search"] = GLOBAL.arguments.search
if GLOBAL.arguments.sort is not None:
ATTRIBUTES["sort"] = GLOBAL.arguments.sort
if GLOBAL.arguments.time is not None:
ATTRIBUTES["time"] = GLOBAL.arguments.time
elif GLOBAL.arguments.subreddit is not None:
if type(GLOBAL.arguments.subreddit) == list:
GLOBAL.arguments.subreddit = "+".join(GLOBAL.arguments.subreddit)
@@ -671,10 +652,15 @@ def main():
except ProgramModeError as err:
PromptUser()
if not Path(GLOBAL.configDirectory).is_dir():
os.makedirs(GLOBAL.configDirectory)
GLOBAL.config = getConfig("config.json") if Path("config.json").exists() \
else getConfig(GLOBAL.configDirectory / "config.json")
if not Path(GLOBAL.defaultConfigDirectory).is_dir():
os.makedirs(GLOBAL.defaultConfigDirectory)
if Path("config.json").exists():
GLOBAL.configDirectory = Path("config.json")
else:
GLOBAL.configDirectory = GLOBAL.defaultConfigDirectory / "config.json"
GLOBAL.config = getConfig(GLOBAL.configDirectory)
if GLOBAL.arguments.log is not None:
logDir = Path(GLOBAL.arguments.log)


@@ -1,240 +0,0 @@
from pprint import pprint
try:
from src.errors import InvalidRedditLink
except ModuleNotFoundError:
from errors import InvalidRedditLink
def QueryParser(PassedQueries,index):
ExtractedQueries = {}
QuestionMarkIndex = PassedQueries.index("?")
Header = PassedQueries[:QuestionMarkIndex]
ExtractedQueries["HEADER"] = Header
Queries = PassedQueries[QuestionMarkIndex+1:]
ParsedQueries = Queries.split("&")
for Query in ParsedQueries:
Query = Query.split("=")
ExtractedQueries[Query[0]] = Query[1]
if ExtractedQueries["HEADER"] == "search":
ExtractedQueries["q"] = ExtractedQueries["q"].replace("%20"," ")
return ExtractedQueries
def LinkParser(LINK):
RESULT = {}
ShortLink = False
if not "reddit.com" in LINK:
raise InvalidRedditLink("Invalid reddit link")
SplittedLink = LINK.split("/")
if SplittedLink[0] == "https:" or SplittedLink[0] == "http:":
SplittedLink = SplittedLink[2:]
try:
if (SplittedLink[-2].endswith("reddit.com") and \
SplittedLink[-1] == "") or \
SplittedLink[-1].endswith("reddit.com"):
RESULT["sort"] = "best"
return RESULT
except IndexError:
if SplittedLink[0].endswith("reddit.com"):
RESULT["sort"] = "best"
return RESULT
if "redd.it" in SplittedLink:
ShortLink = True
if SplittedLink[0].endswith("reddit.com"):
SplittedLink = SplittedLink[1:]
if "comments" in SplittedLink:
RESULT = {"post":LINK}
return RESULT
elif "me" in SplittedLink or \
"u" in SplittedLink or \
"user" in SplittedLink or \
"r" in SplittedLink or \
"m" in SplittedLink:
if "r" in SplittedLink:
RESULT["subreddit"] = SplittedLink[SplittedLink.index("r") + 1]
elif "m" in SplittedLink:
RESULT["multireddit"] = SplittedLink[SplittedLink.index("m") + 1]
RESULT["user"] = SplittedLink[SplittedLink.index("m") - 1]
else:
for index in range(len(SplittedLink)):
if SplittedLink[index] == "u" or \
SplittedLink[index] == "user":
RESULT["user"] = SplittedLink[index+1]
elif SplittedLink[index] == "me":
RESULT["user"] = "me"
for index in range(len(SplittedLink)):
if SplittedLink[index] in [
"hot","top","new","controversial","rising"
]:
RESULT["sort"] = SplittedLink[index]
if index == 0:
RESULT["subreddit"] = "frontpage"
elif SplittedLink[index] in ["submitted","saved","posts","upvoted"]:
if SplittedLink[index] == "submitted" or \
SplittedLink[index] == "posts":
RESULT["submitted"] = {}
elif SplittedLink[index] == "saved":
RESULT["saved"] = True
elif SplittedLink[index] == "upvoted":
RESULT["upvoted"] = True
elif "?" in SplittedLink[index]:
ParsedQuery = QueryParser(SplittedLink[index],index)
if ParsedQuery["HEADER"] == "search":
del ParsedQuery["HEADER"]
RESULT["search"] = ParsedQuery
elif ParsedQuery["HEADER"] == "submitted" or \
ParsedQuery["HEADER"] == "posts":
del ParsedQuery["HEADER"]
RESULT["submitted"] = ParsedQuery
else:
del ParsedQuery["HEADER"]
RESULT["queries"] = ParsedQuery
if not ("upvoted" in RESULT or \
"saved" in RESULT or \
"submitted" in RESULT or \
"multireddit" in RESULT) and \
"user" in RESULT:
RESULT["submitted"] = {}
return RESULT
def LinkDesigner(LINK):
attributes = LinkParser(LINK)
MODE = {}
if "post" in attributes:
MODE["post"] = attributes["post"]
MODE["sort"] = ""
MODE["time"] = ""
return MODE
elif "search" in attributes:
MODE["search"] = attributes["search"]["q"]
if "restrict_sr" in attributes["search"]:
if not (attributes["search"]["restrict_sr"] == 0 or \
attributes["search"]["restrict_sr"] == "off" or \
attributes["search"]["restrict_sr"] == ""):
if "subreddit" in attributes:
MODE["subreddit"] = attributes["subreddit"]
elif "multireddit" in attributes:
MODE["multreddit"] = attributes["multireddit"]
MODE["user"] = attributes["user"]
else:
MODE["subreddit"] = "all"
else:
MODE["subreddit"] = "all"
if "t" in attributes["search"]:
MODE["time"] = attributes["search"]["t"]
else:
MODE["time"] = "all"
if "sort" in attributes["search"]:
MODE["sort"] = attributes["search"]["sort"]
else:
MODE["sort"] = "relevance"
if "include_over_18" in attributes["search"]:
if attributes["search"]["include_over_18"] == 1 or \
attributes["search"]["include_over_18"] == "on":
MODE["nsfw"] = True
else:
MODE["nsfw"] = False
else:
if "queries" in attributes:
if not ("submitted" in attributes or \
"posts" in attributes):
if "t" in attributes["queries"]:
MODE["time"] = attributes["queries"]["t"]
else:
MODE["time"] = "day"
else:
if "t" in attributes["queries"]:
MODE["time"] = attributes["queries"]["t"]
else:
MODE["time"] = "all"
if "sort" in attributes["queries"]:
MODE["sort"] = attributes["queries"]["sort"]
else:
MODE["sort"] = "new"
else:
MODE["time"] = "day"
if "subreddit" in attributes and not "search" in attributes:
MODE["subreddit"] = attributes["subreddit"]
elif "user" in attributes and not "search" in attributes:
MODE["user"] = attributes["user"]
if "submitted" in attributes:
MODE["submitted"] = True
if "sort" in attributes["submitted"]:
MODE["sort"] = attributes["submitted"]["sort"]
elif "sort" in MODE:
pass
else:
MODE["sort"] = "new"
if "t" in attributes["submitted"]:
MODE["time"] = attributes["submitted"]["t"]
else:
MODE["time"] = "all"
elif "saved" in attributes:
MODE["saved"] = True
elif "upvoted" in attributes:
MODE["upvoted"] = True
elif "multireddit" in attributes:
MODE["multireddit"] = attributes["multireddit"]
if "sort" in attributes:
MODE["sort"] = attributes["sort"]
elif "sort" in MODE:
pass
else:
MODE["sort"] = "hot"
return MODE
if __name__ == "__main__":
while True:
link = input("> ")
pprint(LinkDesigner(link))


@@ -3,6 +3,8 @@ import sys
import random
import socket
import webbrowser
import urllib.request
from urllib.error import HTTPError
import praw
from prawcore.exceptions import NotFound, ResponseException, Forbidden
@@ -93,7 +95,7 @@ def beginPraw(config,user_agent = str(socket.gethostname())):
authorizedInstance = GetAuth(reddit,port).getRefreshToken(*scopes)
reddit = authorizedInstance[0]
refresh_token = authorizedInstance[1]
jsonFile(GLOBAL.configDirectory / "config.json").add({
jsonFile(GLOBAL.configDirectory).add({
"reddit_username":str(reddit.user.me()),
"reddit_refresh_token":refresh_token
})
@@ -103,7 +105,7 @@ def beginPraw(config,user_agent = str(socket.gethostname())):
authorizedInstance = GetAuth(reddit,port).getRefreshToken(*scopes)
reddit = authorizedInstance[0]
refresh_token = authorizedInstance[1]
jsonFile(GLOBAL.configDirectory / "config.json").add({
jsonFile(GLOBAL.configDirectory).add({
"reddit_username":str(reddit.user.me()),
"reddit_refresh_token":refresh_token
})
@@ -422,18 +424,20 @@ def checkIfMatching(submission):
eromeCount += 1
return details
elif isDirectLink(submission.url) is not False:
details['postType'] = 'direct'
details['postURL'] = isDirectLink(submission.url)
directCount += 1
return details
elif submission.is_self:
details['postType'] = 'self'
details['postContent'] = submission.selftext
selfCount += 1
return details
directLink = isDirectLink(submission.url)
if directLink is not False:
details['postType'] = 'direct'
details['postURL'] = directLink
directCount += 1
return details
def printSubmission(SUB,validNumber,totalNumber):
"""Print post's link, title and media link to screen"""
@@ -473,7 +477,22 @@ def isDirectLink(URL):
return URL
elif "v.redd.it" in URL:
return URL+"/DASH_600_K"
bitrates = ["DASH_1080","DASH_720","DASH_600", \
"DASH_480","DASH_360","DASH_240"]
for bitrate in bitrates:
videoURL = URL+"/"+bitrate
try:
responseCode = urllib.request.urlopen(videoURL).getcode()
except urllib.error.HTTPError:
responseCode = 0
if responseCode == 200:
return videoURL
else:
return False
for extension in imageTypes:
if extension in URL:


@@ -14,7 +14,8 @@ class GLOBAL:
config = None
arguments = None
directory = None
configDirectory = Path.home() / "Bulk Downloader for Reddit"
defaultConfigDirectory = Path.home() / "Bulk Downloader for Reddit"
configDirectory = ""
reddit_client_id = "BSyphDdxYZAgVQ"
reddit_client_secret = "bfqNJaRh8NMh-9eAr-t4TRz-Blk"
printVanilla = print