mirror of
https://github.com/KevinMidboe/bulk-downloader-for-reddit.git
synced 2026-01-09 18:55:36 +00:00
Compare commits
27 Commits
- `8a3dcd68a3`
- `ac323f2abe`
- `32d26fa956`
- `137481cf3e`
- `9b63c55d3e`
- `3a6954c7d3`
- `9a59da0c5f`
- `d56efed1c6`
- `8f64e62293`
- `bdc43eb0d8`
- `adccd8f3ba`
- `47a07be1c8`
- `1a41dc6061`
- `50cb7c15b9`
- `a1f1915d57`
- `3448ba15a9`
- `ff68b5f70f`
- `588a3c3ea6`
- `8f1ff10a5e`
- `9338961b2b`
- `94bc1c115f`
- `c19d8ad71b`
- `4c8de50880`
- `3e6dfccdd2`
- `20b9747330`
- `be7508540d`
- `ccd9078b0a`
**.gitignore** (vendored)

```diff
@@ -2,4 +2,5 @@ build/
 dist/
 MANIFEST
 __pycache__/
 src/__pycache__/
+config.json
```
**README.md**

```diff
@@ -13,101 +13,17 @@ Downloads media from reddit posts.
 - Saves a reusable copy of posts' details that are found so that they can be re-downloaded again
 - Logs failed ones in a file to so that you can try to download them later
 
-## How it works
-- For **Windows** and **Linux** users, there are executable files to run easily without installing a third party program. But if you are a paranoid like me, you can **[compile it from source code](docs/COMPILE_FROM_SOURCE.md)**.
-
-- **MacOS** users have to **[compile it from source code](docs/COMPILE_FROM_SOURCE.md)**.
+## **[Compiling it from source code](docs/COMPILE_FROM_SOURCE.md)**
+*\* MacOS users have to use this option.*
 
 ## Additional options
 Script also accepts additional options via command-line arguments. Get further information from **[`--help`](docs/COMMAND_LINE_ARGUMENTS.md)**
 
 ## Setting up the script
-You need to create an imgur developer app in order API to work. Go to https://api.imgur.com/oauth2/addclient and fill the form (It does not really matter how you fill it). It should redirect you to a page where it shows your **imgur_client_id** and **imgur_client_secret**.
+You need to create an imgur developer app in order API to work. Go to https://api.imgur.com/oauth2/addclient and fill the form (It does not really matter how you fill it).
+It should redirect you to a page where it shows your **imgur_client_id** and **imgur_client_secret**.
 
-## FAQ
-
-### What do the dots resemble when getting posts?
-- Each dot means that 100 posts are scanned.
-
-### Getting posts is taking too long.
-- You can press Ctrl+C to interrupt it and start downloading.
-
-### How are filenames formatted?
-- Self posts and images that are not belong to an album are formatted as **`[SUBMITTER NAME]_[POST TITLE]_[REDDIT ID]`**.
-  You can use *reddit id* to go to post's reddit page by going to link **reddit.com/[REDDIT ID]**
-
-- An image in an imgur album is formatted as **`[ITEM NUMBER]_[IMAGE TITLE]_[IMGUR ID]`**
-  Similarly, you can use *imgur id* to go to image's imgur page by going to link **imgur.com/[IMGUR ID]**.
-
-### How do I open self post files?
-- Self posts are held at reddit as styled with markdown. So, the script downloads them as they are in order not to lose their stylings.
-  However, there is a [great Chrome extension](https://chrome.google.com/webstore/detail/markdown-viewer/ckkdlimhmcjmikdlpkmbgfkaikojcbjk) for viewing Markdown files with its styling. Install it and open the files with [Chrome](https://www.google.com/intl/tr/chrome/).
-
-  However, they are basically text files. You can also view them with any text editor such as Notepad on Windows, gedit on Linux or Text Editor on MacOS
-
-### How can I change my credentials?
-- All of the user data is held in **config.json** file which is in a folder named "Bulk Downloader for Reddit" in your **Home** directory. You can edit them, there.
+## [FAQ](docs/FAQ.md)
 
-## Changes on *master*
-
-### [06/08/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/210238d0865febcb57fbd9f0b0a7d3da9dbff384)
-- Sending headers when requesting a file in order not to be rejected by server
-
-### [04/08/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/426089d0f35212148caff0082708a87017757bde)
-- Disabled printing post types to console
-
-### [30/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/af294929510f884d92b25eaa855c29fc4fb6dcaa)
-- Now opens web browser and goes to Imgur when prompts for Imgur credentials
-
-### [26/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/1623722138bad80ae39ffcd5fb38baf80680deac)
-- Improved verbose mode
-- Minimalized the console output
-- Added quit option for auto quitting the program after process finishes
-
-### [25/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/1623722138bad80ae39ffcd5fb38baf80680deac)
-- Added verbose mode
-- Stylized the console output
-
-### [24/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/7a68ff3efac9939f9574c2cef6184b92edb135f4)
-- Added OP's name to file names (backwards compatible)
-- Deleted # char from file names (backwards compatible)
-- Improved exception handling
-
-### [23/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/7314e17125aa78fd4e6b28e26fda7ec7db7e0147)
-- Splited download() function
-- Added erome support
-- Removed exclude feature
-- Bug fixes
-
-### [22/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/6e7463005051026ad64006a8580b0b5dc9536b8c)
-- Put log files in a folder named "LOG_FILES"
-- Fixed the bug that makes multireddit mode unusable
-
-### [21/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/4a8c2377f9fb4d60ed7eeb8d50aaf9a26492462a)
-- Added exclude mode
-
-### [20/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/7548a010198fb693841ca03654d2c9bdf5742139)
-- "0" input for no limit
-- Fixed the bug that recognizes none image direct links as image links
-
-### [19/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/41cbb58db34f500a8a5ecc3ac4375bf6c3b275bb)
-- Added v.redd.it support
-- Added custom exception descriptions to FAILED.json file
-- Fixed the bug that prevents downloading some gfycat URLs
-
-### [13/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/9f831e1b784a770c82252e909462871401a05c11)
-- Changed config.json file's path to home directory
-
-### [12/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/50a77f6ba54c24f5647d5ea4e177400b71ff04a7)
-- Added binaries for Windows and Linux
-- Wait on KeyboardInterrupt
-- Accept multiple subreddit input
-- Fixed the bug that prevents choosing "[0] exit" with typing "exit"
-
-### [11/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/a28a7776ab826dea2a8d93873a94cd46db3a339b)
-- Improvements on UX and UI
-- Added logging errors to CONSOLE_LOG.txt
-- Using current directory if directory has not been given yet.
-
-### [10/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/ffe3839aee6dc1a552d95154d817aefc2b66af81)
-- Added support for *self* post
-- Now getting posts is quicker
+
+## [Changes on *master*](docs/CHANGELOG.md)
```
**docs/CHANGELOG.md** (new file)

```diff
@@ -0,0 +1,76 @@
+# Changes on *master*
+
+## [16/08/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/d56efed1c6833a66322d9158523b89d0ce57f5de)
+- Fix the bug that prevents downloading imgur videos
+
+## [15/08/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/adccd8f3ba03ad124d58643d78dab287a4123a6f)
+- Prints out the title of posts' that are already downloaded
+
+## [13/08/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/50cb7c15b9cb4befce0cfa2c23ab5de4af9176c6)
+- Added alternative location of current directory for config file
+- Fixed console prints on Linux
+
+## [10/08/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/8f1ff10a5e11464575284210dbba4a0d387bc1c3)
+- Added reddit username to config file
+
+## [06/08/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/210238d0865febcb57fbd9f0b0a7d3da9dbff384)
+- Sending headers when requesting a file in order not to be rejected by server
+
+## [04/08/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/426089d0f35212148caff0082708a87017757bde)
+- Disabled printing post types to console
+
+## [30/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/af294929510f884d92b25eaa855c29fc4fb6dcaa)
+- Now opens web browser and goes to Imgur when prompts for Imgur credentials
+
+## [26/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/1623722138bad80ae39ffcd5fb38baf80680deac)
+- Improved verbose mode
+- Minimalized the console output
+- Added quit option for auto quitting the program after process finishes
+
+## [25/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/1623722138bad80ae39ffcd5fb38baf80680deac)
+- Added verbose mode
+- Stylized the console output
+
+## [24/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/7a68ff3efac9939f9574c2cef6184b92edb135f4)
+- Added OP's name to file names (backwards compatible)
+- Deleted # char from file names (backwards compatible)
+- Improved exception handling
+
+## [23/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/7314e17125aa78fd4e6b28e26fda7ec7db7e0147)
+- Splited download() function
+- Added erome support
+- Removed exclude feature
+- Bug fixes
+
+## [22/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/6e7463005051026ad64006a8580b0b5dc9536b8c)
+- Put log files in a folder named "LOG_FILES"
+- Fixed the bug that makes multireddit mode unusable
+
+## [21/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/4a8c2377f9fb4d60ed7eeb8d50aaf9a26492462a)
+- Added exclude mode
+
+## [20/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/7548a010198fb693841ca03654d2c9bdf5742139)
+- "0" input for no limit
+- Fixed the bug that recognizes none image direct links as image links
+
+## [19/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/41cbb58db34f500a8a5ecc3ac4375bf6c3b275bb)
+- Added v.redd.it support
+- Added custom exception descriptions to FAILED.json file
+- Fixed the bug that prevents downloading some gfycat URLs
+
+## [13/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/9f831e1b784a770c82252e909462871401a05c11)
+- Changed config.json file's path to home directory
+
+## [12/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/50a77f6ba54c24f5647d5ea4e177400b71ff04a7)
+- Added binaries for Windows and Linux
+- Wait on KeyboardInterrupt
+- Accept multiple subreddit input
+- Fixed the bug that prevents choosing "[0] exit" with typing "exit"
+
+## [11/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/a28a7776ab826dea2a8d93873a94cd46db3a339b)
+- Improvements on UX and UI
+- Added logging errors to CONSOLE_LOG.txt
+- Using current directory if directory has not been given yet.
+
+## [10/07/2018](https://github.com/aliparlakci/bulk-downloader-for-reddit/tree/ffe3839aee6dc1a552d95154d817aefc2b66af81)
+- Added support for *self* post
+- Now getting posts is quicker
```
**docs/FAQ.md** (new file)

```diff
@@ -0,0 +1,23 @@
+# FAQ
+
+## What do the dots resemble when getting posts?
+- Each dot means that 100 posts are scanned.
+
+## Getting posts is taking too long.
+- You can press Ctrl+C to interrupt it and start downloading.
+
+## How are filenames formatted?
+- Self posts and images that are not belong to an album are formatted as **`[SUBMITTER NAME]_[POST TITLE]_[REDDIT ID]`**.
+  You can use *reddit id* to go to post's reddit page by going to link **reddit.com/[REDDIT ID]**
+
+- An image in an imgur album is formatted as **`[ITEM NUMBER]_[IMAGE TITLE]_[IMGUR ID]`**
+  Similarly, you can use *imgur id* to go to image's imgur page by going to link **imgur.com/[IMGUR ID]**.
+
+## How do I open self post files?
+- Self posts are held at reddit as styled with markdown. So, the script downloads them as they are in order not to lose their stylings.
+  However, there is a [great Chrome extension](https://chrome.google.com/webstore/detail/markdown-viewer/ckkdlimhmcjmikdlpkmbgfkaikojcbjk) for viewing Markdown files with its styling. Install it and open the files with [Chrome](https://www.google.com/intl/tr/chrome/).
+
+  However, they are basically text files. You can also view them with any text editor such as Notepad on Windows, gedit on Linux or Text Editor on MacOS
+
+## How can I change my credentials?
+- All of the user data is held in **config.json** file which is in a folder named "Bulk Downloader for Reddit" in your **Home** directory. You can edit them, there.
```
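The filename scheme described in the FAQ can be sketched as a small helper. This is illustrative only: the character set stripped here and the helper name are assumptions, not the script's actual `nameCorrector` implementation.

```python
def make_filename(submitter, title, reddit_id, extension):
    """Build a [SUBMITTER NAME]_[POST TITLE]_[REDDIT ID] filename.

    Sketch of the naming scheme from the FAQ; the stripped character
    set is a guess, not the script's real nameCorrector logic.
    """
    # Drop characters that are unsafe in filenames on common filesystems
    unsafe = '<>:"/\\|?*#'
    clean_title = "".join(c for c in title if c not in unsafe).strip()
    return f"{submitter}_{clean_title}_{reddit_id}{extension}"

print(make_filename("someuser", "A cool: post", "9abcde", ".jpg"))
# someuser_A cool post_9abcde.jpg
```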
**script.py**

```diff
@@ -23,7 +23,7 @@ from src.tools import (GLOBAL, createLogFile, jsonFile, nameCorrector,
 
 __author__ = "Ali Parlakci"
 __license__ = "GPL"
-__version__ = "1.6.1"
+__version__ = "1.6.3"
 __maintainer__ = "Ali Parlakci"
 __email__ = "parlakciali@gmail.com"
```

```diff
@@ -184,9 +184,10 @@ def checkConflicts():
     else:
        user = 1
 
+    search = 1 if GLOBAL.arguments.search else 0
+
     modes = [
-        "saved","subreddit","submitted","search","log","link","upvoted",
-        "multireddit"
+        "saved","subreddit","submitted","log","link","upvoted","multireddit"
     ]
 
     values = {
```

```diff
@@ -199,15 +200,18 @@ def checkConflicts():
     if not sum(values[x] for x in values) == 1:
         raise ProgramModeError("Invalid program mode")
 
-    if values["search"]+values["saved"] == 2:
+    if search+values["saved"] == 2:
         raise SearchModeError("You cannot search in your saved posts")
 
-    if values["search"]+values["submitted"] == 2:
+    if search+values["submitted"] == 2:
         raise SearchModeError("You cannot search in submitted posts")
 
-    if values["search"]+values["upvoted"] == 2:
+    if search+values["upvoted"] == 2:
         raise SearchModeError("You cannot search in upvoted posts")
 
+    if search+values["log"] == 2:
+        raise SearchModeError("You cannot search in log files")
+
     if values["upvoted"]+values["submitted"] == 1 and user == 0:
         raise RedditorNameError("No redditor name given")
```
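The conflict check above counts mutually exclusive mode flags and rejects anything other than exactly one, then forbids `search` alongside certain modes. A standalone sketch of the same idea, with illustrative names and a plain `dict` in place of the script's argument object:

```python
def check_conflicts(args):
    """Require exactly one program mode; allow search only with some modes."""
    modes = ["saved", "subreddit", "submitted", "log", "link",
             "upvoted", "multireddit"]
    values = {mode: 1 if args.get(mode) else 0 for mode in modes}
    search = 1 if args.get("search") else 0

    # Exactly one mode flag must be set
    if sum(values.values()) != 1:
        raise ValueError("Invalid program mode")

    # Search cannot be combined with these modes
    for forbidden in ("saved", "submitted", "upvoted", "log"):
        if search and values[forbidden]:
            raise ValueError(f"You cannot search in {forbidden}")

check_conflicts({"subreddit": "pics", "search": "cats"})  # valid combination
```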
```diff
@@ -385,10 +389,7 @@ def prepareAttributes():
 
     GLOBAL.arguments.link = GLOBAL.arguments.link.strip("\"")
 
-    try:
-        ATTRIBUTES = LinkDesigner(GLOBAL.arguments.link)
-    except InvalidRedditLink:
-        raise InvalidRedditLink
+    ATTRIBUTES = LinkDesigner(GLOBAL.arguments.link)
 
     if GLOBAL.arguments.search is not None:
         ATTRIBUTES["search"] = GLOBAL.arguments.search
```

```diff
@@ -418,7 +419,7 @@ def prepareAttributes():
         ATTRIBUTES["submitted"] = True
 
         if GLOBAL.arguments.sort == "rising":
-            raise InvalidSortingType
+            raise InvalidSortingType("Invalid sorting type has given")
 
     ATTRIBUTES["limit"] = GLOBAL.arguments.limit
```

```diff
@@ -455,6 +456,9 @@ def isPostExists(POST):
 
     possibleExtensions = [".jpg",".png",".mp4",".gif",".webm",".md"]
 
+    """If you change the filenames, don't forget to add them here.
+       Please don't remove existing ones
+    """
     for extension in possibleExtensions:
 
         OLD_FILE_PATH = PATH / (
```
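The `isPostExists` hunk above checks a fixed list of media extensions against files already on disk. A self-contained sketch of that duplicate check (function and parameter names are illustrative, not the script's exact signature):

```python
from pathlib import Path
import tempfile

def post_exists(directory, base_name):
    """Return True if a file with any known media extension already exists.

    Mirrors the possibleExtensions loop in isPostExists; base_name stands
    in for the [SUBMITTER]_[TITLE]_[ID] stem the script builds.
    """
    possible_extensions = [".jpg", ".png", ".mp4", ".gif", ".webm", ".md"]
    return any((Path(directory) / (base_name + ext)).exists()
               for ext in possible_extensions)

# Demonstrate against a throwaway directory
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "user_title_id.png").touch()
    print(post_exists(d, "user_title_id"))  # True
    print(post_exists(d, "other_post"))     # False
```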
```diff
@@ -481,6 +485,8 @@ def isPostExists(POST):
     return False
 
 def downloadPost(SUBMISSION):
+
+    """Download directory is declared here for each file"""
     directory = GLOBAL.directory / SUBMISSION['postSubreddit']
 
     global lastRequestTime
```

```diff
@@ -563,7 +569,10 @@ def download(submissions):
         print(f" – {submissions[i]['postType'].upper()}",end="",noPrint=True)
 
         if isPostExists(submissions[i]):
-            print("\nIt already exists")
+            print(f"\n" \
+                  f"{submissions[i]['postSubmitter']}_"
+                  f"{nameCorrector(submissions[i]['postTitle'])}")
+            print("It already exists")
             duplicates += 1
             downloadedCount -= 1
             continue
```

```diff
@@ -635,6 +644,12 @@ def download(submissions):
     print(" Total of {} links downloaded!".format(downloadedCount))
 
 def main():
+
+    VanillaPrint(
+        f" Bulk Downloader for Reddit v{__version__}\n" \
+        f" Written by Ali PARLAKCI – parlakciali@gmail.com\n\n" \
+        f" https://github.com/aliparlakci/bulk-downloader-for-reddit/"
+    )
     GLOBAL.arguments = parseArguments()
 
     if GLOBAL.arguments.directory is not None:
```

```diff
@@ -643,6 +658,8 @@ def main():
         GLOBAL.directory = Path(input("download directory: "))
 
     print("\n"," ".join(sys.argv),"\n",noPrint=True)
+    print(f"Bulk Downloader for Reddit v{__version__}\n",noPrint=True
+    )
 
     try:
         checkConflicts()
```
```diff
@@ -651,35 +668,21 @@ def main():
 
     if not Path(GLOBAL.configDirectory).is_dir():
         os.makedirs(GLOBAL.configDirectory)
-    GLOBAL.config = getConfig(GLOBAL.configDirectory / "config.json")
+    GLOBAL.config = getConfig("config.json") if Path("config.json").exists() \
+                    else getConfig(GLOBAL.configDirectory / "config.json")
 
     if GLOBAL.arguments.log is not None:
         logDir = Path(GLOBAL.arguments.log)
         download(postFromLog(logDir))
         sys.exit()
 
     try:
         POSTS = getPosts(prepareAttributes())
-    except InsufficientPermission:
-        print("You do not have permission to do that")
-        sys.exit()
-    except NoMatchingSubmissionFound:
-        print("No matching submission was found")
-        sys.exit()
-    except NoRedditSupoort:
-        print("Reddit does not support that")
-        sys.exit()
-    except NoPrawSupport:
-        print("PRAW does not support that")
-        sys.exit()
-    except MultiredditNotFound:
-        print("Multireddit not found")
-        sys.exit()
-    except InvalidSortingType:
-        print("Invalid sorting type has given")
-        sys.exit()
-    except InvalidRedditLink:
-        print("Invalid reddit link")
+    except Exception as exc:
+        logging.error(sys.exc_info()[0].__name__,
+                      exc_info=full_exc_info(sys.exc_info()))
+        print(log_stream.getvalue(),noPrint=True)
+        print(exc)
         sys.exit()
 
     if POSTS is None:
```
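The config change above makes the script prefer a `config.json` in the current working directory and fall back to the per-user config directory. The lookup order can be sketched as follows; the function name is illustrative, and the real script passes the chosen path on to its `getConfig()` helper:

```python
from pathlib import Path

def resolve_config_path(config_directory):
    """Prefer ./config.json; otherwise use the per-user config directory.

    Sketch of the lookup order only (names are illustrative); the script
    itself wraps this choice around its getConfig() call.
    """
    local = Path("config.json")
    return local if local.exists() else Path(config_directory) / "config.json"

# The home-directory folder name matches the one described in the FAQ
print(resolve_config_path(Path.home() / "Bulk Downloader for Reddit"))
```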
```diff
@@ -23,8 +23,7 @@ def dlProgress(count, blockSize, totalSize):
 
     downloadedMbs = int(count*blockSize*(10**(-6)))
     fileSize = int(totalSize*(10**(-6)))
-    sys.stdout.write("\r{}Mb/{}Mb".format(downloadedMbs,fileSize))
-    sys.stdout.write("\b"*len("\r{}Mb/{}Mb".format(downloadedMbs,fileSize)))
+    sys.stdout.write("{}Mb/{}Mb\r".format(downloadedMbs,fileSize))
     sys.stdout.flush()
 
 def getExtension(link):
```
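The rewritten `dlProgress` drops the backspace trick and instead ends each write with a carriage return, so the next update overwrites the same console line. A self-contained sketch of such a reporthook (the simulated loop at the end is illustrative; in the script this callback is driven by the download itself):

```python
import sys

def dl_progress(count, block_size, total_size):
    """Progress reporthook in the urllib.request.urlretrieve style.

    count: number of blocks transferred so far
    block_size: size of each block in bytes
    total_size: total file size in bytes
    """
    downloaded_mb = int(count * block_size * 10**-6)
    total_mb = int(total_size * 10**-6)
    # Trailing \r returns the cursor so the next call overwrites this line
    sys.stdout.write("{}Mb/{}Mb\r".format(downloaded_mb, total_mb))
    sys.stdout.flush()

# Simulate three progress callbacks for a ~30 MB file
for count in (100, 200, 300):
    dl_progress(count, 100_000, 30_000_000)
print()
```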
```diff
@@ -55,10 +54,11 @@ def getFile(fileDir,tempDir,imageURL,indent=0):
     """
 
     headers = [
-        ("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 " \
-                       "(KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"),
+        ("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) " \
+                       "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 "\
+                       "Safari/537.36 OPR/54.0.2952.64"),
         ("Accept", "text/html,application/xhtml+xml,application/xml;" \
-                   "q=0.9,*/*;q=0.8"),
+                   "q=0.9,image/webp,image/apng,*/*;q=0.8"),
         ("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.3"),
         ("Accept-Encoding", "none"),
         ("Accept-Language", "en-US,en;q=0.8"),
```

```diff
@@ -66,7 +66,8 @@ def getFile(fileDir,tempDir,imageURL,indent=0):
     ]
 
     opener = urllib.request.build_opener()
-    opener.addheaders = headers
+    if not "imgur" in imageURL:
+        opener.addheaders = headers
     urllib.request.install_opener(opener)
 
     if not (os.path.isfile(fileDir)):
```
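The hunks above install a global `urllib` opener with browser-like headers, skipping the custom headers for imgur URLs. A minimal runnable sketch of that pattern (the header values here are a shortened illustration, not the script's full list):

```python
import urllib.request

def install_opener_with_headers(image_url):
    """Install a global urllib opener; send browser-like headers except for imgur."""
    headers = [
        ("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                       "AppleWebKit/537.36 (KHTML, like Gecko) "
                       "Chrome/67.0.3396.87 Safari/537.36"),
        ("Accept-Language", "en-US,en;q=0.8"),
    ]
    opener = urllib.request.build_opener()
    if "imgur" not in image_url:
        # Replace the default Python-urllib headers with browser-like ones
        opener.addheaders = headers
    urllib.request.install_opener(opener)
    return opener

opener = install_opener_with_headers("https://example.com/cat.jpg")
```

After `install_opener`, subsequent `urllib.request.urlretrieve` calls in the process use these headers, which is why the script installs it right before downloading.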
```diff
@@ -102,6 +103,8 @@ class Erome:
 
         extension = getExtension(IMAGES[0])
 
+        """Filenames are declared here"""
+
         title = nameCorrector(post['postTitle'])
         print(post["postSubmitter"]+"_"+title+"_"+post['postId']+extension)
```

```diff
@@ -237,8 +240,11 @@ class Imgur:
             post['mediaURL'] = content['object'].link
 
             post['postExt'] = getExtension(post['mediaURL'])
 
         title = nameCorrector(post['postTitle'])
 
+        """Filenames are declared here"""
+
         print(post["postSubmitter"]+"_"+title+"_"+post['postId']+post['postExt'])
 
         fileDir = directory / (
```

```diff
@@ -297,6 +303,8 @@ class Imgur:
                         + "_"
                         + images[i]['id'])
 
+            """Filenames are declared here"""
+
             fileDir = folderDir / (fileName + images[i]['Ext'])
             tempDir = folderDir / (fileName + ".tmp")
```

```diff
@@ -393,12 +401,17 @@ class Gfycat:
         except IndexError:
             raise NotADownloadableLinkError("Could not read the page source")
         except Exception as exception:
-            #debug
-            raise exception
+            raise NotADownloadableLinkError("Could not read the page source")
 
         POST['postExt'] = getExtension(POST['mediaURL'])
 
         if not os.path.exists(directory): os.makedirs(directory)
         title = nameCorrector(POST['postTitle'])
 
+        """Filenames are declared here"""
+
         print(POST["postSubmitter"]+"_"+title+"_"+POST['postId']+POST['postExt'])
 
         fileDir = directory / (
```

```diff
@@ -453,6 +466,9 @@ class Direct:
         POST['postExt'] = getExtension(POST['postURL'])
         if not os.path.exists(directory): os.makedirs(directory)
         title = nameCorrector(POST['postTitle'])
+
+        """Filenames are declared here"""
+
         print(POST["postSubmitter"]+"_"+title+"_"+POST['postId']+POST['postExt'])
 
         fileDir = directory / (
```

```diff
@@ -475,6 +491,9 @@ class Self:
         if not os.path.exists(directory): os.makedirs(directory)
 
         title = nameCorrector(post['postTitle'])
+
+        """Filenames are declared here"""
+
         print(post["postSubmitter"]+"_"+title+"_"+post['postId']+".md")
 
         fileDir = directory / (
```

```diff
@@ -494,7 +513,8 @@ class Self:
 
     @staticmethod
     def writeToFile(directory,post):
 
+        """Self posts are formatted here"""
        content = ("## ["
                    + post["postTitle"]
                    + "]("
```
```diff
@@ -67,7 +67,7 @@ class NoMatchingSubmissionFound(Exception):
 class NoPrawSupport(Exception):
     pass
 
-class NoRedditSupoort(Exception):
+class NoRedditSupport(Exception):
     pass
 
 class MultiredditNotFound(Exception):
```
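The rename from `NoRedditSupoort` to `NoRedditSupport` accompanies a broader pattern in this diff: bare `raise SomeError` statements gain human-readable messages. Because these classes simply subclass `Exception`, the message passed to the constructor is carried along for free and surfaces through a generic `except Exception as exc: print(exc)` handler:

```python
class NoRedditSupport(Exception):
    pass

class MultiredditNotFound(Exception):
    pass

# The constructor argument becomes the exception's string form, so a
# generic handler can report something useful without per-type excepts.
try:
    raise NoRedditSupport("Reddit does not support that")
except Exception as exc:
    print(exc)  # Reddit does not support that
```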
```diff
@@ -29,7 +29,7 @@ def LinkParser(LINK):
     ShortLink = False
 
     if not "reddit.com" in LINK:
-        raise InvalidRedditLink
+        raise InvalidRedditLink("Invalid reddit link")
 
     SplittedLink = LINK.split("/")
```

```diff
@@ -9,7 +9,7 @@ from prawcore.exceptions import NotFound, ResponseException, Forbidden
 
 from src.tools import GLOBAL, createLogFile, jsonFile, printToFile
 from src.errors import (NoMatchingSubmissionFound, NoPrawSupport,
-                        NoRedditSupoort, MultiredditNotFound,
+                        NoRedditSupport, MultiredditNotFound,
                         InvalidSortingType, RedditLoginFailed,
                         InsufficientPermission)
```

```diff
@@ -48,6 +48,7 @@ def beginPraw(config,user_agent = str(socket.gethostname())):
 
         self.client = self.recieve_connection()
         data = self.client.recv(1024).decode('utf-8')
+        str(data)
         param_tokens = data.split(' ', 2)[1].split('?', 1)[1].split('&')
         params = {
             key: value for (key, value) in [token.split('=') \
```
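The handler above parses the OAuth redirect by splitting the raw HTTP request line into query parameters. A standalone sketch of that split-based parsing; the request line below is a fabricated example standing in for the data received on the redirect socket:

```python
def parse_oauth_params(request_data):
    """Extract query parameters from the first line of a raw HTTP request.

    Mirrors the split-based parsing in the diff; the request line used
    for demonstration is fabricated, not real OAuth traffic.
    """
    # "GET /?state=abc&code=xyz HTTP/1.1" -> request target -> query -> pairs
    param_tokens = request_data.split(' ', 2)[1].split('?', 1)[1].split('&')
    return {key: value
            for key, value in (token.split('=') for token in param_tokens)}

params = parse_oauth_params("GET /?state=uniqueKey&code=authCode123 HTTP/1.1\r\n")
print(params)  # {'state': 'uniqueKey', 'code': 'authCode123'}
```

A production implementation would normally use `urllib.parse.urlsplit` and `parse_qs` instead of manual splitting, but this mirrors what the diff actually does.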
```diff
@@ -93,6 +94,7 @@ def beginPraw(config,user_agent = str(socket.gethostname())):
         reddit = authorizedInstance[0]
         refresh_token = authorizedInstance[1]
         jsonFile(GLOBAL.configDirectory / "config.json").add({
+            "reddit_username":str(reddit.user.me()),
             "reddit_refresh_token":refresh_token
         })
     else:
```

```diff
@@ -102,6 +104,7 @@ def beginPraw(config,user_agent = str(socket.gethostname())):
         reddit = authorizedInstance[0]
         refresh_token = authorizedInstance[1]
         jsonFile(GLOBAL.configDirectory / "config.json").add({
+            "reddit_username":str(reddit.user.me()),
             "reddit_refresh_token":refresh_token
         })
         return reddit
```

```diff
@@ -115,7 +118,7 @@ def getPosts(args):
     reddit = beginPraw(config)
 
     if args["sort"] == "best":
-        raise NoPrawSupport
+        raise NoPrawSupport("PRAW does not support that")
 
     if "subreddit" in args:
         if "search" in args:
```

```diff
@@ -144,8 +147,8 @@ def getPosts(args):
     }
 
     if "search" in args:
-        if args["sort"] in ["hot","rising","controversial"]:
-            raise InvalidSortingType
+        if GLOBAL.arguments.sort in ["hot","rising","controversial"]:
+            raise InvalidSortingType("Invalid sorting type has given")
 
         if "subreddit" in args:
             print (
```

```diff
@@ -169,16 +172,16 @@ def getPosts(args):
             )
 
         elif "multireddit" in args:
-            raise NoPrawSupport
+            raise NoPrawSupport("PRAW does not support that")
 
         elif "user" in args:
-            raise NoPrawSupport
+            raise NoPrawSupport("PRAW does not support that")
 
         elif "saved" in args:
-            raise NoRedditSupoort
+            raise NoRedditSupport("Reddit does not support that")
 
         if args["sort"] == "relevance":
-            raise InvalidSortingType
+            raise InvalidSortingType("Invalid sorting type has given")
 
     if "saved" in args:
         print(
```

```diff
@@ -243,7 +246,7 @@ def getPosts(args):
                 ) (**keyword_params)
             )
         except NotFound:
-            raise MultiredditNotFound
+            raise MultiredditNotFound("Multireddit not found")
 
     elif "submitted" in args:
         print (
```

```diff
@@ -273,7 +276,7 @@ def getPosts(args):
                 reddit.redditor(args["user"]).upvoted(limit=args["limit"])
             )
         except Forbidden:
-            raise InsufficientPermission
+            raise InsufficientPermission("You do not have permission to do that")
 
     elif "post" in args:
         print("post: {post}\n".format(post=args["post"]).upper(),noPrint=True)
```

```diff
@@ -385,7 +388,7 @@ def redditSearcher(posts,SINGLE_POST=False):
         print()
         return subList
     else:
-        raise NoMatchingSubmissionFound
+        raise NoMatchingSubmissionFound("No matching submission was found")
 
 def checkIfMatching(submission):
     global gfycatCount
```