Analyzing API Endpoints

hackysterio
16 min read · Jul 27, 2023


Unlock the world of API Hacking with our upcoming series, tailor-made for beginners looking to become API security experts.

We are currently developing a comprehensive guide that will be released in its entirety by September 2023. In the meantime, I’m thrilled to share a sneak peek of what’s to come.

This article is just a taste of the simple yet powerful insights we’ll be offering in our full write-up. Here, I break down how to Analyze API Endpoints, making it easy for anyone to understand and apply.

Whether you’re new to the field or a seasoned pro looking to brush up on your skills, our upcoming API Hacking series has something for everyone.

API endpoints are like doors that let different parts of a computer program talk to each other. Imagine you want to get some information or do something in a program, like getting your profile information or posting a comment on a social media platform. The program needs to know where to go to get that information or do that action, right? That’s where API endpoints come in.

API endpoints are like target locations inside the program where you can find the information or perform the action you want. When you want to get the information or do something, you send a request to the particular endpoint that is responsible for that task, and in return the API endpoint sends back a response. It’s like knocking on the door and asking for what you need, and having it given to you.

These endpoints are really important because they make sure the program knows where to find the things you want and how to get them. They help the different parts of the program communicate and work together smoothly. So, API endpoints are like the doors that allow different parts of a program to talk and share information with each other.

Endpoints are usually named after the functionality they provide.
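For a concrete feel of that naming convention, here are a few illustrative crAPI-style requests (treat the paths as examples; the exact ones you meet may differ slightly by version):

    # Endpoint names usually mirror the feature they expose (illustrative paths)
    curl -s http://127.0.0.1:8888/workshop/api/shop/products          # GET: list products in the shop
    curl -s -X POST http://127.0.0.1:8888/identity/api/auth/signup    # POST: create a new user account
    curl -s http://127.0.0.1:8888/identity/api/v2/user/dashboard      # GET: fetch the logged-in user's profile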

By now, you should understand what an API endpoint is and how to identify one.

It’s time to spin up the crAPI software that you installed earlier. We’ll be analyzing the API endpoints on our crAPI server.

The link to the crAPI GitHub repository was provided in previous blog posts, so I believe you have it installed by now.

It’s time we spun it up and put it to use. Navigate to the folder where you have crAPI and run the following command (assuming you have set up crAPI): “sudo docker-compose -f docker-compose.yml --compatibility up -d”
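As a minimal sketch (assuming you cloned crAPI into ~/crapi and are using its bundled docker-compose.yml; adjust the path to wherever yours lives):

    # Sketch only: move into the folder that holds crAPI's docker-compose.yml
    cd ~/crapi/deploy/docker

    # Start all crAPI containers in the background
    sudo docker-compose -f docker-compose.yml --compatibility up -d

    # Confirm the containers are up before moving on
    sudo docker ps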

*I’ll walk you through, step by step, how to analyze API endpoints as a complete beginner…

After this, you should be able to access crAPI through localhost. For the crAPI homepage, navigate to “http://127.0.0.1:8888”, while for the crAPI MailHog server, navigate to “http://127.0.0.1:8025”. Note: I’m using the Firefox browser.

You should see something like this

On the crAPI login page, click on the “SignUp” button and fill in the credentials like in the image below:

But before you hit the Sign Up button, right-click on the page and select “Inspect (Q)”. This opens the browser’s built-in Developer Tools, which let us inspect elements, modify code, and analyze network requests.

Notice it opens on the “Inspector” tab; now click on the “Network” tab. All the network requests we’re going to make will be recorded here, including API requests.

With the “Network” tab open, click on the SignUp button. You’ll see the following request appear in the list.
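A handy shortcut here: in the Network tab you can right-click the signup request and use the “Copy as cURL” option (its exact menu location varies by Firefox version) to grab the whole call. For the signup above, it comes out roughly like this, with whatever values you typed into the form:

    # Approximate "Copy as cURL" output for the captured signup request (trimmed for readability)
    curl 'http://127.0.0.1:8888/identity/api/auth/signup' \
      -X POST \
      -H 'Content-Type: application/json' \
      --data-raw '{"name":"Hacky","email":"hacky@example.com","number":"9999999999","password":"Password1!"}'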

*The next step is to spin up Postman so we can start discovering more API endpoints.

First, create a collection called crAPI

Then click “New” as highlighted above to create a new HTTP request

Next, paste the URL you copied in the image above. Also take note of the kind of HTTP request; in this case it was a POST request, so change the method to POST.

You’ll get a status code of “400 Bad Request”. This means the server cannot process our (the client’s) request due to an error in the request itself.

Let’s review our request tab and check if we’re missing any required parameters.

Let’s look at the raw JSON data so we can copy and paste it into the body of our request in Postman, since the missing body parameters are what caused the “400 Bad Request”.

Go to the “Body” tab, click on “raw”, change “Text” to “JSON” like I highlighted below, paste the JSON data, then hit “Send”.
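If you want to see the same before-and-after from the terminal, here is a small sketch (the endpoint URL and JSON fields are the ones from this walkthrough; substitute whatever your Network tab captured, and expect the exact status codes to depend on your crAPI version):

    # Without a body, the server rejects the request (a 400 Bad Request in this walkthrough)
    curl -s -o /dev/null -w "%{http_code}\n" -X POST \
      http://127.0.0.1:8888/identity/api/auth/signup

    # With the JSON body included, the same endpoint accepts the request
    curl -s -o /dev/null -w "%{http_code}\n" -X POST \
      -H "Content-Type: application/json" \
      -d '{"name":"Hacky","email":"hacky@example.com","number":"9999999999","password":"Password1!"}' \
      http://127.0.0.1:8888/identity/api/auth/signup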

After this, you’ll get the following response:

As you can see, it generates a token. This is our access token, or authentication token, and we’ll need it in subsequent interactions with the server. The response also defines the type of token generated (this will be useful as we move ahead).

What we’ll do is define this generated access token as a variable and add it to our crAPI collection.

Define the variable as “access token”, paste the access token you got in the image above as the current value, and leave the initial value blank.

Come back to the “Authorization” tab and you’ll see the token variable has been defined. Click “Save”, and save it to the crAPI collection we’re working with.

The reason for setting up Authorization is that most crAPI endpoints are protected: without this token attached, the server will reject our requests, so we want Postman to send it with every request in the collection.
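As a rough sketch of what that buys us, here is the same idea from the command line: attach the token as a Bearer token and call a protected endpoint (I’m using the user dashboard endpoint as the example; confirm the exact path and token type against what your own capture shows):

    # Paste the access token you received from the server
    TOKEN="paste-your-access-token-here"

    # Call a protected crAPI endpoint with the token attached as a Bearer token
    curl -s http://127.0.0.1:8888/identity/api/v2/user/dashboard \
      -H "Authorization: Bearer $TOKEN"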

Click on the crAPI collection, and save.

Notice how this particular endpoint is under the crAPI collection.

Now, let us analyze another API endpoint. The same steps and procedure as the first one will be followed.

We’ll copy the URL, then create a new HTTP Request in Postman

Then paste it where the arrow indicates and click on “Save”.

Save it to the crAPI collection like before.

After hitting “Send” you should see a response. Also notice it has been saved to the crAPI collection, just like the first endpoint. You’ll even notice that the credit available to us is still 100, i.e. we’ve not spent any money yet.

You need to play around with the crAPI webpage, use the functionality there, and try to collect more endpoints.

You should have collected many endpoints, just like mine:

Now let’s analyze the “products” endpoint. We’ll follow the same procedure we have been using for the previous ones.

Copy the URL, create a new HTTP request in Postman, paste it, then analyze it. Also, don’t forget to save it to the collection you’re working with, in this case crAPI.

Let’s try to buy something; the dev tools will capture it, and we’ll analyze it in Postman.

Same methodology…

But note that this is a POST request. When we’re recreating it in Postman, we need to select the right HTTP method, otherwise we’ll get an error.

When I filled in a GET request instead of a POST request, it gave a 500 error, which is an Internal Server Error, i.e. the server encountered something unexpected while processing the request. If we use the correct HTTP method (POST, in this case), the error goes away.
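A quick way to see the effect of the HTTP method from the terminal (a sketch; I’m using the shop orders endpoint as the example, and the status codes you get back may differ from mine):

    TOKEN="paste-your-access-token-here"

    # Wrong method: GET against an endpoint that expects POST (this is where the 500 showed up for me)
    curl -s -o /dev/null -w "%{http_code}\n" \
      -H "Authorization: Bearer $TOKEN" \
      http://127.0.0.1:8888/workshop/api/shop/orders

    # Correct method: POST (it will still complain about the missing body, which we fix next)
    curl -s -o /dev/null -w "%{http_code}\n" -X POST \
      -H "Authorization: Bearer $TOKEN" \
      http://127.0.0.1:8888/workshop/api/shop/orders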

But it says “This field is required”

We’ll have to go back to the Firefox dev tools and look at the request tab to see if there is any parameter we need to add.

And of course, there is.

What we need to do, like we’ve done earlier, is copy this JSON and paste it into the “Body” tab on Postman.
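Here is the command-line equivalent of that step, as a sketch (the product_id and quantity values are examples; use whatever the JSON in your own dev tools shows):

    # Reuse the access token from earlier
    TOKEN="paste-your-access-token-here"

    # Purchase request with the JSON body included
    curl -s -X POST http://127.0.0.1:8888/workshop/api/shop/orders \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"product_id": 1, "quantity": 1}'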

You’ll notice that our credit has reduced to 80.0 from 100.0

Finally, you can rename all the URL endpoints in Postman for easy identification:

=========================================================

Let’s dive into more technical methods of analyzing API endpoints.

In a case where we don’t have the API documentation available to us, what we do is called “reverse engineering”.

Reverse engineering an API is the process of trying to understand how a program or an API works by analyzing its interactions and working backwards to the principles and technologies used to create it.

In other words, we’ll be creating a collection of the API ourselves, since we don’t have documentation for it.

There are two methods to reverse engineer an API. The first method is to do it manually.

The second method of reverse engineering, or building out a collection of an API, is to do it automatically.

For the first method, which is to manually build out a collection of API requests, there are two ways of doing this. The first way is what we did in an earlier module: we inspected every request of interest via the “Inspect” element in our Firefox browser, then analyzed each HTTP request in Postman.

The second way of building an API request collection manually is to proxy the entire web traffic of our browser through Postman. This allows us to capture many more requests, and it is easier to build out a collection this way than with the first. I’ll demonstrate below:

Below the collection we created in the previous module, “crAPI demo”, we’ll create a new collection. Let’s call it “postman proxy”.

To proxy this traffic via Postman, we’ll need to turn on the “Capture requests” option in Postman, as highlighted below.

Next, fill in the required fields:

Set the port to “5555”. This is the port the Postman proxy will listen on, and it’s the port we’ll point our Firefox browser at.

If you’re hosting crAPI locally, your URL section will be like the above.

Or if you’re using the publicly hosted crAPI at “http://crapi.apisec.ai” your URL section will be like this.

Preferably, you should use the publicly hosted crAPI, because the locally hosted one on your machine might have some errors and you will need to troubleshoot it from time to time.

After you have completed all the steps above, your Postman should look like this:

Next, go to your browser and point it at the Postman proxy listening on port 5555.

After all this is set, the next step is to go to the crAPI website and use as much of its functionality as possible so that Postman can aggregate all the traffic.

Here, I’m signing up

You can see Postman has already captured the request.

You should also try to purchase a vehicle

After you “Click here” to try to purchase a new vehicle, you should check the crAPI MailHog server at “http://crapi.apisec.ai:8025”, or “http://127.0.0.1:8025” if you’re hosting crAPI locally on your computer.

These are the details of the vehicle in your crAPI mail server. Copy them and fill in the details on your crAPI homepage, and you’ll successfully purchase a vehicle.

This is the vehicle I purchased.

After playing around with all the functionalities of the webpage, you can stop capturing requests.

I’d advise you to try out more functionality than I did above: try changing your password and email, uploading videos and pictures, etc. Explore the functionality of the webpage to the fullest; this will give you more attack surface to work with.

This is an overview of the requests from our browser that were automatically captured by Postman via the proxy we configured.

Now, navigate to the “Requests” tab, like I highlighted with the circle below, and select only the endpoints with API requests. You’ll notice I didn’t select the login endpoint because it is not an API request.

Now save this to the “postman proxy” collection we’re working with currently.

This is the result of the requests we gathered; you can see there aren’t many, because I didn’t explore the functionality of the web app to the fullest.

The next thing we’ll do is arrange the endpoints we gathered into folders according to their properties, i.e. from the endpoints we gathered we can see there are three different groups, namely identity, workshop, and community. So we’ll create a folder for each and sort the endpoints into the folder that corresponds to them.

As you can see below, I’ve successfully arranged all the endpoints into folders.

The steps above are another way in which we can manually build out our collection in Postman. You can see that the two methods we have used so far are not sustainable for a very large API attack surface.

Now, let’s try to build our API collection automatically and compare it to the previous ones we’ve built.

To do this, follow these steps:

** First of all, “Man-In-The-Middle web” (mitmweb) is a tool you must have installed in the first module of this series. It’s time we used it. Mitmweb acts as a web proxy, similar to the well-known tool Burp Suite. It serves as a user-friendly interface for mitmproxy, an open-source and very versatile tool designed to intercept, inspect, modify, and replay web traffic. While mitmproxy operates from the command line, mitmweb offers a seamless and intuitive graphical interface to interact with mitmproxy’s capabilities. **

Enter the command “mitmweb” in your terminal
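For reference, a minimal sketch of the command and its common options (the defaults below are mitmproxy’s standard ones; only run the plain “mitmweb” if those ports are free on your machine):

    # Start mitmweb with its defaults: proxy traffic on port 8080, web UI on port 8081
    mitmweb

    # Or be explicit about the ports if something else is already using them
    mitmweb --listen-port 8080 --web-port 8081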

It’ll show you this GUI. Notice that it’s listening on port 8080.

Set your FoxyProxy to send traffic through port 8080.

As usual, start to test the functionality of the web app by signing up, logging in, etc.

I uploaded a video, and I changed my display picture.

I tried to purchase a vehicle by performing the “Click here” action.

The details of the vehicle were sent to my crAPI MailHog server at “http://crapi.apisec.ai:8025”. Yours would be at “http://127.0.0.1:8025” if you’re hosting it locally.

I inputted the details from the mail here.

And voila, my car is purchased

This is the mitmweb interface; as you can see, all the actions we’ve been performing are being aggregated here.

After you’re satisfied with trying out ALL functionalities of the web app, you can turn off the proxy

Don’t forget to dig VERY DEEP into the functionality. Change passwords too, etc.

Next, click on “File”, then click “Save”. This will download a flows file, which contains all the captured network traffic.

The downloaded flows file

Go back to the terminal where you launched the “mitmweb” command and press “Ctrl + C”; this will stop mitmweb.

Enter this command next: “sudo mitmproxy2swagger -i /Downloads/flows -o swagger.yml -p http://crapi.apisec.ai -f flow”

The command above will create a “swagger.yml” file

NB: replace the “/Downloads/flows” part of the command with the path where the flows file was downloaded on your PC, just like I did with “/home/hackysterio/Downloads/flows” for mine.
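So the full command ends up looking something like this sketch (swap in your own username and download path):

    # Convert the captured flows into an OpenAPI (Swagger) skeleton
    sudo mitmproxy2swagger \
      -i /home/hackysterio/Downloads/flows \
      -o swagger.yml \
      -p http://crapi.apisec.ai \
      -f flow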

** Mitmproxy2swagger is also a tool you must have installed in the earlier part of this series. It is a tool that automatically converts captures from MITMweb into OpenAPI 3.0 specifications. With “mitmproxy2swagger,” the process of documenting APIs becomes more streamlined and less manual. By analyzing the captured network traffic, the tool can extract relevant information about API endpoints, request parameters, response formats, and other details needed to generate comprehensive OpenAPI 3.0 specifications (formerly known as Swagger specifications). **

We’ll need to edit the contents of this file:

The “ignore:” prefix in front of all the API endpoints of interest should be deleted, just like I’ve done below.

You can change the title of the document too. I changed mine to “crAPI_Swagger_file”

Notice that all requests from the MailHog server on port 8025 remain ignored, i.e. the “ignore:” in front of them was not edited out.
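If you prefer to make that edit from the terminal, here is a rough sketch with sed. It assumes the generated file lists candidate paths under “x-path-templates:” as “- ignore:/…” entries (check your own swagger.yml first, and keep the backup in case the pattern doesn’t match your capture):

    # Back up the generated file before touching it
    cp swagger.yml swagger.yml.bak

    # Strip the "ignore:" prefix only from the identity, workshop and community paths,
    # leaving MailHog (port 8025) and other noise ignored
    sed -i -E 's#- ignore:(/identity/|/workshop/|/community/)#- \1#' swagger.yml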

Save your file and enter the command below: “sudo mitmproxy2swagger -i /Downloads/flows -o swagger.yml -p http://crapi.apisec.ai -f flow --examples”

The “--examples” flag at the end of this command will add example data to requests and responses, thereby enhancing the API documentation.

Navigate to the website “https://editor.swagger.io”, click on “Import file”, and import the swagger file you just edited.

** The Swagger Editor is a free tool for creating APIs. It helps you design, document, and test APIs easily. You can use it to analyze endpoints by checking their URLs, supported methods, input data, and responses. **

This is after I have imported my file.

These are the endpoints it collated; you can start analyzing endpoints from here.

I clicked on one of the endpoints to analyze it

The next step is to go back to Postman and import our file. Click on “New” and create a new folder.

Import the crAPI_Swagger_File we created

As you can see, it has appeared in Postman alongside our crAPI and postman proxy collections.

You can see it is well arranged into folders and subfolders.
