8 Answers
From MediaFire's Terms of Service:
General Use of the Service, Permissions and Restrictions
You agree while using MediaFire Services, that you may not:
- Alter or modify any part of the Services;
- Use the Services for any illegal purpose;
- Use any robot, spider, offline readers, site search and/or retrieval application, or other device to retrieve or index any portion of the Services, with the exception of public search engines.
So essentially, by using anything other than the tools that MediaFire provides via their website, you are in fact breaking their terms of service.
answered Jun 12, 2012 at 1:24 by blendmaster345
But if wget counts as a "retrieval application" then a browser does too... I think they're talking about things that crawl the whole site. – Heath Mitchell, Aug 18, 2020
Actually it can be done. What you have to do is:
- Go to the link as if you're going to download the file to your computer.
- When the "download" button comes up, right-click it, copy the link, and pass that to wget.
It'll be something like
wget http://download85794.mediafire.com/whatever_your_file_is
answered Feb 28, 2014 at 19:36 by T Jones
That's right! It works this way. – A.Essam, Apr 18, 2016
Does not work anymore, sadly; they just serve the .html. – Luc H, Sep 19, 2020
bash function:
mdl () {
  url=$(curl -Lqs "$1" | grep "href.*download.*media.*" | tail -1 | cut -d '"' -f 2)
  aria2c -x 6 "$url"  # or wget "$url" if you prefer.
}
Example:
$ sudo apt install aria2
$ mdl "http://www.mediafire.com/?tjmjrmtuyco"
01/14 13:58:34 [NOTICE] Downloading 1 item(s)
38MiB/100MiB(38%) CN:4 DL:6.6MiB ETA:9s]
answered Jan 14, 2020 at 12:00 by Zibri
I've never tried myself, but there are a few things you could try to "cheat" the website.
For example, --referer will let you specify a referer URL; maybe the site expects you to come from a specific "home" page or something, and with this option wget will pretend it's coming from there. Also, --user-agent will make wget "pretend" it's a different agent, namely a browser like Firefox. --header will let you forge the whole HTTP request to mimic that of a browser. If none of those work, there are also more options dealing with cookies and other advanced settings: see man wget for the whole list.
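Putting those flags together, a minimal sketch of what such an attempt could look like (the referer, user-agent, and header values below are illustrative guesses, not values known to work against MediaFire):
# Pretend to be a Firefox browser arriving from MediaFire's front page.
wget --referer="http://www.mediafire.com/" \
     --user-agent="Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/115.0" \
     --header="Accept-Language: en-US,en;q=0.5" \
     "http://www.mediafire.com/?tjmjrmtuyco"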
I hope this helps a bit: if you succeed, please post how you did it!
answered Jul 19, 2011 at 12:13 by MacThePenguin
MediaFire now only allows the download from the IP that requested it. So first, you need to download the page using the following command:
curl -O "http://www.mediafire.com/file/6ddhdfg/db.zip/file"
Once the page is downloaded, find the URL inside it that looks like
http://download*.mediafire.com/*
and then use the command wget to download the file
wget http://download*.mediafire.com/*
P.S. The * varies from download to download, so you need to find the exact value.
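Those two steps can be scripted. A rough sketch, assuming the direct download*.mediafire.com link appears verbatim in the page source (the grep pattern is a guess at the URL shape, not anything MediaFire documents):
# Fetch the landing page and pull out the first direct download link.
page_url="http://www.mediafire.com/file/6ddhdfg/db.zip/file"
url=$(curl -Ls "$page_url" | grep -o 'http[s]*://download[^"]*' | head -1)
wget "$url"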
answered May 22, 2020 at 6:19 by B M
Sites like this use multiple methods to prevent simple/automated downloading. A few examples of such techniques include:
- Using sessions
- Generating unique download links/keys
- Using CAPTCHAs (can be defeated, but certainly not by wget)
- Timers for non-premium users to delay the download
- IFrames containing the download link
- Providing the link from another site/domain
- Checking the web client (is it a web browser or something else)
- Checking referer to prevent hotlinking (did the download request come from the site or elsewhere)
- Checking the headers to verify it conforms to their expectations
- Using POST instead of GET to use "hidden" form fields
- Setting and checking cookies
- Using JavaScript to redirect or generate the download link
- Using Flash to test the user or generate the download link
Basically, downloading files from sites like this with tools like cURL or wget would, at best, be difficult, and certainly not practical.
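For the simpler checks on that list (cookies, referer, user agent), wget can at least play along. A minimal sketch using real wget flags; the direct-link URL is a placeholder, and none of this is known to defeat MediaFire in particular:
# Visit the landing page first, saving any session cookies it sets.
wget --save-cookies cookies.txt --keep-session-cookies -O page.html "http://www.mediafire.com/?tjmjrmtuyco"
# Replay the cookies, plus a plausible referer and user agent, on the download.
wget --load-cookies cookies.txt \
     --referer="http://www.mediafire.com/" \
     --user-agent="Mozilla/5.0" \
     "http://download.example.com/placeholder-direct-link"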
answered Jun 12, 2012 at 19:41 by Synetech
Right-click the download button and choose "copy link address", then:
wget <url>
Easy as that; just did it.
answered Jul 20, 2018 at 11:23 by AnonymousUser
Most of the other answers don't seem to work anymore, but I've found another trick that did work for me.
Using Chrome as an example, though this should work in all Chromium-based browsers:
- Open F12 (the developer tools) and go to the Network tab.
- Click the download button.
- Find the newly downloading file in the Network tab and, while it's downloading, right-click it, go to Copy, and choose "Copy as cURL (bash)".
If you execute the copied command on the command line (in this case the bash shell) while the download is still active in your browser, it will download correctly.
You should redirect the output to a file (curl ... > outputfile) so you don't get everything thrown at your terminal session, but instead save it to a file.
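For reference, the copied command usually looks something like this (the URL and header values are abbreviated and hypothetical; yours will carry whatever cookies your browser actually sent):
# Replays the browser's request, including its session cookies.
curl 'http://download1234.mediafire.com/somekey/file.zip' \
  -H 'User-Agent: Mozilla/5.0 ...' \
  -H 'Cookie: ukey=...; session=...' \
  > file.zip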
answered Sep 19, 2020 at 15:46 by Luc H