DebugBear API Documentation
Getting started
To get started, install the debugbear Node module and generate an API key. You'll also need to find the ID of the page you want to analyze.
Learn more about getting started.
Managing projects and pages
Read about managing projects and pages with the Node API.
Running a website test
You can either run a single test or trigger tests in bulk.
Create a script.js file like this:
const { DebugBear } = require("debugbear");
// TypeScript: import { DebugBear } from "debugbear"

const debugbear = new DebugBear(process.env.DEBUGBEAR_API_KEY);
const pageId = 185;

debugbear.pages.analyze(pageId).then((analysis) => {
  analysis.waitForResult().then(() => {
    console.log("Test complete, view results here: " + analysis.url);
  });
});
Then run DEBUGBEAR_API_KEY=... node script.js.
Additional options
The Node module supports similar arguments to the CLI.
Building a particular commit
debugbear.pages.analyze(pageId, {
  commitHash: "e2ba122",
  buildTitle: "Add support for tags",
  // infer additional details from the environment, for
  // example the name of the current branch
  inferBuildInfo: true,
});
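In CI you will usually read the commit details from environment variables rather than hard-coding them. A minimal sketch (the helper name is ours, not part of the debugbear module; GITHUB_SHA is a GitHub Actions convention, so substitute your CI provider's variable):

```javascript
// Build the options object for debugbear.pages.analyze() from
// CI environment variables. GITHUB_SHA is set by GitHub Actions;
// other CI providers use different variable names.
function buildAnalyzeOptions(env) {
  return {
    commitHash: env.GITHUB_SHA,
    buildTitle: env.BUILD_TITLE || "CI build",
    inferBuildInfo: true,
  };
}

// Usage: debugbear.pages.analyze(pageId, buildAnalyzeOptions(process.env));
console.log(buildAnalyzeOptions(process.env));
```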
Customizing the URL and HTTP headers
debugbear.pages.analyze(pageId, {
  url: "http://staging.com",
  customHeaders: {
    "X-Feature-Flags": "tags",
  },
});
Passing in custom cookies
debugbear.pages.analyze(pageId, {
  cookies: [
    {
      name: "testcookie",
      value: "testvalue",
      domain: "www.example.com",
      path: "/",
    },
  ],
});
Access build status and metrics
const analysis = await debugbear.pages.analyze(pageId, {
  commitHash: "abc123",
});
const res = await analysis.waitForResult();
console.log(res);
This logs something like the following:
{
  "url": "https://www.debugbear.com/viewResult/787431",
  "hasFinished": true,
  "build": {
    "status": "failure",
    "oneLineSummary": "PF 100 ➔ 96, SEO 85 ➔ 80, Req# +1",
    "metrics": {
      "analysis.date": "2020-02-14T19:06:42.201Z",
      "performance.speedIndex": 1087,
      "performance.interactive": 895,
      "performance.firstContentfulPaint": 845,
      "performance.firstMeaningfulPaint": 1301,
      "performance.score": 0.96,
      "accessibility.score": 0.55,
      "bestPractices.score": 0.79,
      "seo.score": 0.8,
      "pwa.score": 0.54,
      "pageWeight.total": 1666205,
      "pageWeight.document": 5745,
      "pageWeight.stylesheet": 4562,
      "pageWeight.image": 1571870,
      "pageWeight.script": 65597,
      "pageWeight.font": 18431,
      "pageWeight.ajax": 0,
      "pageWeight.media": 0,
      "pageWeight.other": 0,
      "pageWeight.redirect": 0,
      "cpu.total": 42,
      "cpu.scriptEvaluation": 8.4,
      "console.errors": 0,
      "console.warnings": 0,
      "html.errors": 1,
      "html.warnings": 1,
      "mark.start": 120,
      "mark.fully-rendered": 14125,
      "measure.timeout": 14005,
      "crux.granularity": "url",
      "crux.fcp.p75": 906,
      "crux.lcp.p75": 901,
      "crux.cls.p75": 0,
      "crux.fid.p75": 2
    }
  }
}
The build status will be neutral if no performance budget has been set up; otherwise it will be either success or failure.
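In CI you typically want the pipeline to fail when the budget check fails. A minimal sketch of that mapping (the statuses are the three values listed above; the helper name is our own):

```javascript
// Map the DebugBear build status to a process exit code.
// "neutral" (no budget configured) and "success" both pass.
function exitCodeForStatus(status) {
  return status === "failure" ? 1 : 0;
}

// In a CI script, after waitForResult():
// process.exitCode = exitCodeForStatus(res.build.status);
console.log(exitCodeForStatus("failure")); // 1
```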
Using the HTTP API directly
You can trigger tests without using the Node module:
curl https://www.debugbear.com/api/v1/page/PAGE_ID/analyze \
  -X POST \
  -H "x-api-key: API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url":"http://example.com","buildTitle":"Site update"}'
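The same request can be made from Node without the client library, using the built-in fetch (Node 18+). Only the endpoint, the x-api-key header, and the JSON body are taken from the cURL example above; the wrapper function and its name are our own sketch:

```javascript
// Endpoint for triggering a test via the HTTP API.
const analyzeEndpoint = (pageId) =>
  `https://www.debugbear.com/api/v1/page/${pageId}/analyze`;

// POST the analyze request and return the parsed JSON response.
async function analyzePage(pageId, apiKey, body) {
  const response = await fetch(analyzeEndpoint(pageId), {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`Analyze request failed: ${response.status}`);
  }
  return response.json();
}

// Only runs when an API key is provided:
if (process.env.DEBUGBEAR_API_KEY) {
  analyzePage(185, process.env.DEBUGBEAR_API_KEY, {
    url: "http://example.com",
    buildTitle: "Site update",
  }).then((analysis) => console.log(analysis));
}
```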
Run tests in bulk
You can trigger multiple tests at once with the analyzeBulk function.
const bulkTests = await debugbear.pages.analyzeBulk([123, 124]);
const results = await bulkTests.waitForResult();
Currently, the bulk test API does not support passing additional options like a commitHash or custom headers. Contact support if you are interested in these features.
The result looks like this:
{
  "hasFinished": true,
  "results": [
    {
      "hasFinished": true,
      "build": {
        "status": "neutral",
        "metrics": {
          "analysis.date": "2024-05-26T07:56:39.634Z",
          "performance.largestContentfulPaint": 1822,
          "performance.totalBlockingTime": 190,
          "...": "..."
        },
        "oneLineSummary": "No changes.",
        "budgets": []
      },
      "analysis": {
        "commitHash": null,
        "commitBranch": null,
        "buildTitle": null
      },
      "page": {
        "id": "123",
        "name": "MZ Finland",
        "url": "https://www.example.com/",
        "...": "..."
      }
    },
    "..."
  ]
}
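Each entry in results has the same shape as a single test result, so once hasFinished is true you can iterate over them. A small sketch that builds one summary line per page, using the field names from the sample above (the helper is our own):

```javascript
// Build a one-line summary for each result in a finished bulk run.
function summarizeBulk(bulk) {
  return bulk.results.map(
    (r) => `${r.page.name} (${r.page.url}): ${r.build.oneLineSummary}`
  );
}

// Example with a result shaped like the sample above:
const sample = {
  hasFinished: true,
  results: [
    {
      build: { status: "neutral", oneLineSummary: "No changes." },
      page: { name: "MZ Finland", url: "https://www.example.com/" },
    },
  ],
};
console.log(summarizeBulk(sample).join("\n"));
```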
Export lab test result metrics
The getMetrics function provides the same metrics as the on-page data export button, for example performance metrics, page weight, and Lighthouse scores.
let metrics = await debugbear.pages.getMetrics(pageId, {
  // JavaScript Date months are zero-based: May 1 to June 1
  from: new Date(2022, 4, 1),
  to: new Date(2022, 5, 1),
});
console.log(metrics[0]["performance.score"]);
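Each element of the returned array is a flat object keyed by metric name, like the JSON shown earlier. For example, to average a metric over the selected period (the helper is our own sketch, not part of the debugbear module):

```javascript
// Average a single metric across an array of metric objects,
// skipping entries where the metric is missing.
function averageMetric(metrics, key) {
  const values = metrics
    .map((m) => m[key])
    .filter((v) => typeof v === "number");
  if (values.length === 0) return null;
  return values.reduce((a, b) => a + b, 0) / values.length;
}

// Example with metric objects shaped like the export above:
const sampleMetrics = [
  { "performance.speedIndex": 1087 },
  { "performance.speedIndex": 895 },
];
console.log(averageMetric(sampleMetrics, "performance.speedIndex")); // 991
```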
Using cURL
You can also load page metrics by using the HTTP API directly.
curl https://www.debugbear.com/api/v1/page/PAGE_ID/metrics \
  -X GET \
  -H "x-api-key: API_KEY" \
  -G \
  -d from=2022-02-01 \
  -d to=2022-03-01
This will load data spanning from midnight on 1 February 2022 to midnight on 1 March 2022.
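From Node, the equivalent URL can be built with the built-in URLSearchParams; only the endpoint and the from/to parameters are taken from the cURL example above:

```javascript
// Build the metrics URL with a date range, matching the cURL call above.
function metricsUrl(pageId, from, to) {
  const params = new URLSearchParams({ from, to });
  return `https://www.debugbear.com/api/v1/page/${pageId}/metrics?${params}`;
}

console.log(metricsUrl(185, "2022-02-01", "2022-03-01"));
```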
Get detailed analysis data
You can retrieve detailed information about a specific analysis using the analysis ID.
Get network requests
The getRequests function returns a list of all network requests made during the test.
const requests = await debugbear.analyses.getRequests(analysisId);
console.log(requests);
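The exact shape of each request object is not documented here, so treat the field names below as assumptions and inspect a real response first. As a sketch, you could total the transferred bytes across all requests:

```javascript
// Sum a byte-size field across requests. The default field name
// "transferSize" is an assumption, not a documented field; pass
// the actual key from a real response.
function totalBytes(requests, sizeKey = "transferSize") {
  return requests.reduce((sum, r) => sum + (r[sizeKey] || 0), 0);
}

// Example with hypothetical request objects:
const sampleRequests = [
  { url: "https://www.example.com/", transferSize: 5745 },
  { url: "https://www.example.com/app.js", transferSize: 65597 },
];
console.log(totalBytes(sampleRequests)); // 71342
```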
Using cURL
curl https://www.debugbear.com/api/v1/analysis/ANALYSIS_ID/requests \
  -X GET \
  -H "x-api-key: API_KEY"
Get Lighthouse report
The getLighthouseReport function returns the complete Lighthouse report in JSON format for the tested page.
const lhr = await debugbear.analyses.getLighthouseReport(analysisId);
console.log(lhr);
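The returned object is a standard Lighthouse result (LHR) in JSON, so you can read category scores the way you would from any Lighthouse run, for example:

```javascript
// Read the performance category score (0–1) from an LHR object.
// The categories.performance.score path is part of the standard
// Lighthouse result format.
function performanceScore(lhr) {
  return lhr.categories.performance.score;
}

// Example with a minimal LHR-shaped object:
const sampleLhr = {
  categories: { performance: { score: 0.96 } },
};
console.log(performanceScore(sampleLhr)); // 0.96
```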
Using cURL
curl https://www.debugbear.com/api/v1/analysis/ANALYSIS_ID/lhr \
  -X GET \
  -H "x-api-key: API_KEY"