A HAR (HTTP Archive) file is a JSON-formatted log of a browser session’s network traffic. Every modern browser’s DevTools, plus proxy tools like Charles, Fiddler, and mitmproxy, can export a HAR. Requestly imports a HAR as a fully editable collection of requests - useful for replaying a captured flow, building an API collection from a real session, or sharing a reproducible bug report.
How to Import a HAR File

Capture a HAR file
In Chrome or Edge, open DevTools → Network, reproduce the flow you want to capture, then right-click any row and choose Save all as HAR with content. Firefox and Safari offer the same export from their network panels. Charles Proxy, Fiddler, and mitmproxy all support HAR export from their session menus.
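Whichever tool you export from, the resulting file is plain JSON with a top-level `log` object holding `pages` and `entries`. The skeleton below is an illustrative sketch of that shape (field names follow the HAR 1.2 format; all values here are made up, not from a real capture):

```python
import json

# Minimal HAR 1.2-shaped skeleton; every value below is illustrative.
har = {
    "log": {
        "version": "1.2",
        "creator": {"name": "example-exporter", "version": "0.0"},
        "pages": [
            {"id": "page_1", "title": "Example page", "pageTimings": {}},
        ],
        "entries": [
            {
                "pageref": "page_1",
                "request": {
                    "method": "GET",
                    "url": "https://example.com/api/user",
                    "headers": [],
                    "cookies": [],
                },
                "response": {
                    "status": 200,
                    "headers": [],
                    "content": {"mimeType": "application/json", "text": "{}"},
                },
                "timings": {"send": 0, "wait": 12, "receive": 1},
            }
        ],
    }
}

# A valid HAR round-trips through json without loss.
assert json.loads(json.dumps(har)) == har
```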
Open the Import dialog
In the API Client, click the Import button in the top-left corner and choose HAR from the dropdown.
Select your .har file
Drag the file onto the upload area or click to browse. HAR files up to 30 MB are supported. Larger files are rejected before parsing - if your capture is bigger, narrow the recording window in DevTools and re-export.
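The 30 MB cap is enforced before any parsing happens. A pre-parse guard of this kind can be sketched as follows (`check_har_size` is a hypothetical helper for illustration, not Requestly code):

```python
import os

MAX_HAR_BYTES = 30 * 1024 * 1024  # the 30 MB limit noted above


def check_har_size(path: str) -> None:
    """Reject oversized HAR files before attempting to parse them.

    Illustrative sketch only - Requestly performs an equivalent check
    internally, but this helper name and error text are assumptions.
    """
    size = os.path.getsize(path)
    if size > MAX_HAR_BYTES:
        raise ValueError(
            f"HAR is {size} bytes; files over 30 MB are rejected before parsing. "
            "Narrow the recording window in DevTools and re-export."
        )
```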
Choose what to import
Requestly parses the file and shows a preview with two options:
- All requests - every captured request, including images, scripts, stylesheets, and fonts.
- Only API calls - JSON and XML APIs, form submissions, mutations, and CORS preflight requests. Static assets (images, CSS, JS, fonts) are skipped.
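The "Only API calls" option amounts to a filter over each entry's response MIME type. The sketch below approximates that behavior; the exact prefix list and helper name are assumptions, not Requestly's actual implementation:

```python
# Illustrative filter separating API calls from static assets in a HAR.
# The prefix/exact-match lists are assumptions, not Requestly's real rules.
STATIC_PREFIXES = ("image/", "font/", "text/css")
STATIC_EXACT = {"application/javascript", "text/javascript"}


def is_api_call(entry: dict) -> bool:
    """True if the entry looks like an API call rather than a static asset."""
    mime = entry["response"]["content"].get("mimeType", "")
    mime = mime.split(";")[0].strip()  # drop "; charset=..." parameters
    return not (mime.startswith(STATIC_PREFIXES) or mime in STATIC_EXACT)


entries = [
    {"response": {"content": {"mimeType": "application/json; charset=utf-8"}}},
    {"response": {"content": {"mimeType": "image/png"}}},
    {"response": {"content": {"mimeType": "text/javascript"}}},
]
api_only = [e for e in entries if is_api_call(e)]
print(len(api_only))  # 1 - only the JSON response survives the filter
```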
How HAR Entries Map to a Collection
- Root collection. Every import produces one collection at the root, named `HAR_Import_YYYY-MM-DD_HH-MM-SS`. The timestamp keeps repeated imports distinct in the sidebar.
- Sub-collections from pages. If the HAR file includes a `log.pages` array (Chrome and Firefox both populate it on a per-tab navigation basis), each page becomes a sub-collection under the root, named after the page title. Requests are grouped under the page they belong to. Entries that don’t reference any page sit at the root level next to the sub-collections.
- Requests and examples. Every HAR entry produces one request plus one example. The request is editable like any other Requestly request; the example holds the response that was captured (status, headers, body, timing) so you can replay against it without losing the original.
- Request body. JSON, form-urlencoded, and multipart bodies are detected from the captured `Content-Type` and opened in the right editor. Other bodies open in the raw editor with the appropriate syntax (HTML, XML, JavaScript, plain text).
- Cookies. When the HAR’s structured `cookies[]` array is present (Chrome and Firefox exports), it is the source of truth - the imported request gets a single `Cookie` header built from those entries, and any duplicate raw `Cookie` header is dropped. Captures from proxy tools that only carry cookies in the raw header are passed through verbatim.
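The page-to-sub-collection grouping described above can be sketched as a pass over `log.entries`, keyed by each entry's `pageref` (the helper name below is hypothetical; Requestly's internals may differ):

```python
from collections import defaultdict

ROOT = "__root__"  # sentinel for entries with no pageref


def group_entries_by_page(log: dict) -> dict:
    """Group HAR entries under their page title; unreferenced entries go to the root.

    Illustrative sketch of the grouping rule, not Requestly's actual code.
    """
    titles = {p["id"]: p.get("title", p["id"]) for p in log.get("pages", [])}
    groups = defaultdict(list)
    for entry in log.get("entries", []):
        key = titles.get(entry.get("pageref"), ROOT)
        groups[key].append(entry)
    return dict(groups)


log = {
    "pages": [{"id": "page_1", "title": "Checkout"}],
    "entries": [
        {"pageref": "page_1", "request": {"url": "https://shop.example/api/cart"}},
        {"request": {"url": "https://cdn.example/beacon"}},  # no pageref -> root
    ],
}
groups = group_entries_by_page(log)
print(sorted(groups))  # ['Checkout', '__root__']
```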
What’s Skipped, and Why
Some HAR entries can’t be imported as Requestly requests. The preview surfaces a warning when this happens:
- WebSocket connections. Entries with a `ws://` or `wss://` URL, or marked `_resourceType: "websocket"` by Chrome, are skipped - Requestly’s API Client doesn’t support WebSocket replay.
- Binary request bodies. Some captures (analytics SDKs, gzip-compressed payloads) contain binary POST bodies with embedded null bytes. Requestly strips the null bytes so the body can be stored, and warns you that the imported request may not reproduce the original wire format byte-for-byte. The text-readable portion of the body is preserved.

If the file can’t be parsed as JSON, or lacks a `log.entries` array, the import fails before the preview step with an error explaining what’s wrong.
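The two skip rules above can be sketched as a pair of small checks; again, these helper names and the exact detection logic are assumptions for illustration:

```python
def is_websocket(entry: dict) -> bool:
    """Detect WebSocket entries by URL scheme or Chrome's _resourceType marker."""
    url = entry["request"]["url"]
    return url.startswith(("ws://", "wss://")) or entry.get("_resourceType") == "websocket"


def sanitize_body(text: str) -> str:
    """Strip embedded null bytes so a binary-ish body can be stored as text.

    The readable portion survives, but the result may no longer match the
    original wire format byte-for-byte - mirroring the warning described above.
    """
    return text.replace("\x00", "")


print(is_websocket({"request": {"url": "wss://example.com/socket"}}))  # True
print(sanitize_body("event=\x00\x00click"))  # event=click
```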
What’s Next?
Save to a Collection
Organize and rename the imported requests
Add Environment Variables
Replace hardcoded hosts and tokens with reusable variables
Run the Collection
Replay every captured request in sequence


