eCommerce API Testing, Reporting & AEM

API Integration benefits for eCommerce
APIs make it possible to integrate several platforms: for example, you can connect your eCommerce website to a shipping provider’s account and import both order and shipment data, centralizing your shipping activities on a single platform. With features such as order fulfillment, courier management, labeling, invoicing, printing package information, tracking, and confirmation alerts, eCommerce fulfillment APIs help organizations automate, coordinate, manage, and streamline their operations. So, anyone who wants to build a robust platform for an eCommerce firm should incorporate an API into the shipping and logistics software solution. Why use APIs? For their extensibility, increased security, reusability, scalability, and synchronization.
Data managed in an eCommerce system needs to be integrated with other systems in order to fulfill eCommerce goals. For this integration, the eCommerce platform serves a few APIs, and through these APIs data is shared and hosted on other systems. A tester needs to be involved to test those APIs, which carry valuable data, to ensure proper communication between systems. API-led integration can enhance how people engage with new devices and technological developments. Multichannel selling, inventory management, shipping, and tailored experiences across several channels can all help to improve eCommerce fulfillment processes.
Since API tests sit at the service (integration) level of Mike Cohn’s renowned Test Pyramid, at least 20% of all of our tests should be API-focused. API tests are quick, have a high return on investment, and make it easier to validate business logic, security, compliance, and other application features. When the API is public and gives end users programmatic access to our services or application, API tests effectively become end-to-end tests and should cover the entire user story. The significance of API testing is therefore clear. A number of techniques make API testing easier: automated testing, manual testing, test environments, libraries, frameworks, and tools. However, before creating any test methods, you must decide what to test. This is true regardless of the tools of the trade you use, be it Postman, JMeter, pytest, mocha, REST Assured, supertest, Jasmine, or any other.
What to test? API test strategy
The test strategy is a summary of the test requirements that may later be used to create a complete test plan specifying concrete test cases and test scenarios. An API is essentially a contract between a client and a server, or between two applications. It is crucial to verify that the contract is accurate before starting any implementation tests. To begin, check the specification to ensure that endpoints are correctly named, resources and their types accurately reflect the object model, there are no functional gaps or duplications, and relationships between resources are accurately reflected in the API.
Each test is made up of test actions: the particular steps the test must perform in accordance with the API test flow. For each API call, the test would be expected to do the following:
- Check the correct HTTP status code: For example, a successful request should return 200 OK, invalid client-side requests should return a 4XX error, and server-side failures should return a 5XX error.
- Check the response payload: Check for a proper JSON body as well as correct types, field names and values — even in error responses.
- Check the response headers: HTTP server headers affect both performance and security.
- Check the correct application state: This applies mainly to manual testing, or when a UI or another interface can easily be inspected.
- Check the basic performance soundness: The test fails if an operation was finished correctly but took an unacceptable length of time.
Now let us explore some of the test scenario types that can be tested.
- Happy path tests,
- Negative tests with valid inputs,
- Negative tests with invalid inputs,
- Extended happy paths with optional parameters,
- Monkey testing combined with boundary value analysis (attempts to break the API),
- Permission, security and authorization tests,
- Isolated test requests,
- Multi-step scenarios combining several requests,
- Web UI tests (in our scope it’s AEM) using API responses as expected data,
- Chaining responses and requests, etc.
Now let’s move on to some real-life eCommerce API testing, creating test reports, and how we test data managed on AEM after the API is integrated with the system.
API tests in Action
First, we need to pick a tool; here we are picking Postman. We also need to pick an eCommerce API collection (one that provides responses similar to real eCommerce APIs) for demonstrating the tasks and scenarios.

Now we are ready to start testing these APIs. Let’s assume these APIs/endpoints execute the following actions:
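For illustration, assume the collection exposes endpoints along these lines (the names are hypothetical, inferred from the /search and modelid examples used later in this article):

    GET /search?q={{term}}        searches the catalog, returns matching models, each with an id
    GET /models/{{modelid}}       fetches a single model by its id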

Execution of Test Scenarios
Happy path test:
- Validate status code: Let’s execute API requests with all required parameters set to valid values. All calls should return an HTTP status code of 2XX.

Here we can see that the API request returned a response with status code 200 OK for the GET request.
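In Postman, this observation can be automated from the request’s Tests tab; a minimal sketch:

    // Assert the happy-path status code
    pm.test("Status code is 200", () => {
        pm.response.to.have.status(200);
    });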
- Payload validation: From ‘Figure: Happy Path Test’ we can see that the JSON object we receive in the response is well-formed. The response structure is determined by the data model. (A scripted sketch of this check and the header check follows this list.)
- State validation: Check that there is NO STATE CHANGE in the system for GET requests.
- Header validation: Check that HTTP headers, such as Connection, Content-Type, Expires, Access-Control-Allow-Origin, Cache-Control, HSTS, Keep-Alive, and other standard header fields, are as intended. Check that no information is leaked via headers; for example, the X-Powered-By header should not be exposed to the user.
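A minimal Postman sketch of the payload and header checks (the expected Content-Type and the banned X-Powered-By header are assumptions; align them with your API’s contract):

    // Payload: the body parses as JSON and has the expected top-level type
    pm.test("Body is well-formed JSON", () => {
        const body = pm.response.json(); // throws if the body is not valid JSON
        pm.expect(body).to.be.an("object");
    });

    // Headers: correct Content-Type, no technology fingerprint leaked
    pm.test("Headers are as intended", () => {
        pm.expect(pm.response.headers.get("Content-Type")).to.include("application/json");
        pm.expect(pm.response.headers.has("X-Powered-By")).to.be.false;
    });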

- Sanity Test to measure performance: The response is received within a realistic expected time frame, as indicated in the test plan. In ‘Figure: Happy Path Test (2)’ we can see the response time.
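Scripted, with an assumed budget of 1000 ms (use whatever figure your test plan specifies):

    // Fail the test if the call succeeded but took too long
    pm.test("Response time is acceptable", () => {
        pm.expect(pm.response.responseTime).to.be.below(1000);
    });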
Negative tests with valid inputs: Suppose we had a few API endpoints with POST/PUT/DELETE requests (all our collection actually contains is GET requests). Let’s assume we execute the following actions:
- Try to create a resource with an already existing name
- Try to delete a non-existent resource
- Try to update a resource with invalid data, etc.
We should expect to receive the following results:
- Validate status code: Check that a non-2XX HTTP status code is returned and that it matches the error condition described in the specification.
- Payload validation: Check that an error response was received and that the error format is correct; for example, the error can be a legitimate JSON object or a plain string (as described in the specification). Check for a clear, descriptive error message/description field, and check that the error description is correct for this error case and agrees with the specification.
- Header validation: As described in the earlier section.
- Sanity Test to measure performance: Ensure that errors are reported in a timely way.
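As a sketch, here is what the check for deleting a non-existent resource might look like, assuming the specification promises a 404 with a JSON error object carrying a message field:

    // DELETE on a missing resource: expect 404 and a descriptive error body
    pm.test("Missing resource returns 404", () => {
        pm.response.to.have.status(404);
    });
    pm.test("Error body is descriptive", () => {
        const err = pm.response.json();
        pm.expect(err).to.have.property("message");
        pm.expect(err.message).to.be.a("string").and.not.be.empty;
    });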
Negative tests with invalid inputs: Again, suppose we have a few API endpoints with POST/PUT/DELETE requests beyond the GET requests in our collection. Let’s assume we execute API calls with incorrect input, such as:
- Invalid or missing authorization token
- Missing required parameters
- Invalid values for endpoint parameters
- Incorrect modelid in the path or query parameters
- Invalid model payload
- Payload with an incomplete model
- Invalid values in nested entity fields
- HTTP headers with invalid values
- Unsupported endpoint methods, etc.
We should expect results as described in the previous section.
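For instance, for the missing-token case, a hedged sketch of the assertion (assuming the specification mandates 401 Unauthorized):

    // Unauthenticated call: no Authorization header sent
    pm.test("Missing token is rejected with 401", () => {
        pm.response.to.have.status(401);
    });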

- Extended happy paths with optional parameters: Perform an API call with both valid required and optional parameters. Carry out the same tests as in the happy path section, but this time include the endpoint’s optional parameters, and expect the same results as in the happy path tests.
- Monkey testing combined with boundary value analysis: To test the API’s resilience, intentionally try to break it with incorrect request content: an incorrect payload content type, an incorrect payload structure, and overflowing parameter values. For example, try to create a user configuration with a title longer than 200 characters; try to GET a user with an invalid modelid that is 1000 characters long; overflow the payload with a huge JSON request body; apply boundary value testing; send empty payloads and empty sub-objects in the payload; use illegal characters in parameters or the payload; send incorrect HTTP headers (e.g., Content-Type); and run small concurrency tests with concurrent API calls that write to the same resources. A pre-request sketch for fabricating such inputs follows.
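A minimal pre-request script for fabricating oversized inputs (the variable names and the 200-character limit are illustrative):

    // Pre-request script: build deliberately oversized values
    const longTitle = "x".repeat(201);   // one character past the assumed 200-char limit
    const hugeId = "9".repeat(1000);     // absurdly long modelid
    pm.variables.set("longTitle", longTitle);
    pm.variables.set("hugeId", hugeId);
    // Reference them in the request as {{longTitle}} and {{hugeId}}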
- Permission, security and authorization test:
- Verify that the API is built using the correct security principles, such as deny-by-default, fail securely, the least privilege principle, rejecting any illegal inputs, and so on.
- Positive: guarantee that the API responds to correct authorization via all agreed-upon authentication methods (bearer token, cookies, digest, etc.) as described in the specification.
- Negative: make sure the API rejects all unauthorized calls.
- Role Permissions: guarantee that particular endpoints are accessible to users based on their role. API should reject requests to endpoints that are not authorized for the user’s role.
- Protocol: verify HTTP/HTTPS compliance
- Data leaks: guarantee that internal data representations that should remain internal do not leak outside the public API in response payloads.
- Policies for rate limiting, throttling, and access control
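Two of these checks, sketched under assumptions (a 403 for the under-privileged role, HSTS on an HTTPS endpoint):

    // Authenticated but under-privileged call: expect 403 Forbidden
    pm.test("Viewer role is denied on an admin endpoint", () => {
        pm.response.to.have.status(403);
    });

    // Transport hardening: HSTS should be advertised over HTTPS
    pm.test("HSTS header is present", () => {
        pm.expect(pm.response.headers.has("Strict-Transport-Security")).to.be.true;
    });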
- Performance, Load Test, Stress Tests: Check API response time, latency, TTFB/TTLB in various scenarios (in isolation and under load). Determine capacity limits and ensure that the system functions as intended under load and fails gracefully when stressed.
- Web UI (in our scope it’s AEM) tests taking API responses as expected data: If the API is integrated with an AEM system and its data is displayed on the published instance, then the tester must compare the API response data with the data shown on the AEM published instance.
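Within Postman itself this can be approximated by fetching the published page and checking that it renders a value from the API response; a sketch with a hypothetical page URL and field name:

    // Verify the published AEM page renders the product name from the API
    const apiProductName = pm.response.json().name; // assumed field
    pm.sendRequest("https://publish.example.com/content/site/en/product.html", (err, aemPage) => {
        pm.test("AEM published page shows the API data", () => {
            pm.expect(err).to.be.null;
            pm.expect(aemPage.text()).to.include(apiProductName);
        });
    });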
- Chaining responses and requests: Chaining responses and requests is one of the simplest things in API testing. Suppose a value from one response is used as a parameter in a subsequent request. You can simply save that value as an environment variable and pass it into the parameters when making requests.
Step 1: Save the value of a key as an environment variable after parsing the response body. Suppose we have a response from our /search endpoint and we want to save the “id” key’s value to an environment variable.

We just have to write a test like the following.
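A sketch, assuming the /search response is a JSON array of results whose items carry an id field:

    // Parse the /search response and stash the first result's id
    const results = pm.response.json();
    pm.environment.set("id", results[0].id);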

Step 2: Simply add the saved environment variable to the next request’s parameters.
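For example, reference the variable with Postman’s double-curly syntax in the next request (the endpoint path is illustrative):

    GET {{baseUrl}}/models/{{id}}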

With the process described, you can chain other responses and requests as business logic and requirements dictate.
Let’s add a few tests using Postman
Suppose we want to skip manual checks and execute the test cases using Postman scripts. Let’s add scripts for the API testing tasks described earlier in this article, which are:
- Check the correct HTTP status code
- Check the response payload
- Check the response headers
- Check the correct application state
- Check the basic performance soundness
To test these, just add a script like the one below to your request.
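Pulling together the checks sketched piecemeal earlier, a minimal script covering all five tasks (the time budget and header expectations are assumptions to adapt):

    // 1. Correct HTTP status code
    pm.test("Status code is 200", () => pm.response.to.have.status(200));

    // 2. Response payload: valid JSON with the expected shape
    pm.test("Payload is well-formed JSON", () => {
        pm.expect(pm.response.json()).to.be.an("object");
    });

    // 3. Response headers
    pm.test("Content-Type is JSON", () => {
        pm.expect(pm.response.headers.get("Content-Type")).to.include("application/json");
    });

    // 4. Application state: a GET must not change server state; verify this
    //    by replaying the request and comparing, or by checking the UI
    // 5. Basic performance soundness
    pm.test("Response time under 1000 ms", () => {
        pm.expect(pm.response.responseTime).to.be.below(1000);
    });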

Just add these scripts to your requests and run the collection to generate a report from Postman.

From the Collection Runner you can select your desired iteration limit and delay, and read data from an external file if you need to. To save responses for the report, just check the ‘Save responses’ checkbox.

To generate an HTML report, you can simply install Newman and run a single command from the command prompt. Before running it, export your collection from Postman and install Newman’s HTML reporter.
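A sketch of the commands (the collection filename is a placeholder; the community htmlextra reporter is used here, though the basic HTML reporter works the same way):

    npm install -g newman
    npm install -g newman-reporter-htmlextra
    newman run My_Collection.postman_collection.json -r htmlextra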

The report should look like the following image.

There are other ways to generate reports using API keys, and collection runs can be automated with Jenkins or other CI/CD tools. That knowledge can be shared in another article. This is it for now. Happy Testing.
Connect with me on LinkedIn: https://www.linkedin.com/in/minhazbillah