Export Productivity and Efficiency Metrics Using the Harness SEI API
Overview
This page explains how to export team productivity and efficiency metrics, configure request parameters, and interpret the CSV output.
The SEI Insights Export API enables you to export team-level reports and team-level metrics for individual developers by adjusting the request body. The CSV output contains one row per team or contributor (depending on the request) and one column per metric, making it easy to analyze and compare performance across teams, developers, and time periods.
With this API, you can:
- Export Productivity metrics such as PR velocity, coding days, and completed work items per developer.
- Export Efficiency (DORA) metrics such as lead time for changes, deployment frequency, and mean time to restore.
- Include all child teams under a specified team in a single CSV export.
- Export team-level metrics for individual developers.
Export SEI 2.0 reports
Both team-level and individual-level exports use the same endpoint. The behavior depends on the parameters you include in the request body.
Endpoint: POST /v2/insights/teams/reports
Authentication: Requires an x-api-key header with an API key generated from your Harness account. For more information about generating an API key, see Manage API keys.
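All requests share the same basic shape; only the body differs. As a minimal skeleton (using the same placeholder host as the examples later on this page; substitute your own Harness API host, key, and request body):

curl -X POST "https://api.example.com/v2/insights/teams/reports?format=csv" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <YOUR_API_KEY>" \
  -d '{ ... }' \
  --output report.csv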
- Team-Level Reports
- Team-level Metrics (Individual Developers)
Exports aggregate productivity and efficiency metrics per team.
Request Body: ExportRequestDTO (see the Request body structure section below)
Response: A CSV file containing one row per team and one column per metric.
Request body structure
For ExportRequestDTO:
{
  "dateStart": "2024-01-01",
  "dateEnd": "2024-12-31",
  "teamRefId": 123,
  "granularity": "weekly",
  "productivity": {
    "metrics": ["PR_VELOCITY_PER_DEV", "CODING_DAYS_PER_DEV"]
  },
  "efficiency": {
    "metrics": ["LEAD_TIME_FOR_CHANGES", "DEPLOYMENT_FREQUENCY"],
    "calculationType": "MEAN"
  }
}
Field descriptions
| Field | Type | Required | Description |
|---|---|---|---|
| dateStart | Date (yyyy-MM-dd) | Yes | Start date for the report period. |
| dateEnd | Date (yyyy-MM-dd) | Yes | End date for the report period. |
| teamRefId | Number | No | Team identifier. The CSV includes this team and all child teams under it. |
| granularity | String | No | Time unit for rate-based metrics (daily, weekly, monthly). Defaults to weekly. |
| productivity | ProductivityRequestDto | No | Productivity metrics configuration. |
| efficiency | EfficiencyRequestDto | No | Efficiency metrics configuration. |
Granularity
The granularity field specifies the time unit for rate-based metrics, which measure counts over time. It affects how certain metrics are calculated and reported.
Metrics affected by granularity include the following:
- CODING_DAYS_PER_DEV: For example, 3 coding days per week.
- WORKTYPE_COMPLETED_PER_DEV: For example, 1.22 work items completed per week.
- DEPLOYMENT_FREQUENCY: For example, 20 deployments per week.
Common granularity values include the following:
- daily: Metrics calculated per day.
- weekly (default): Metrics calculated per week.
- monthly: Metrics calculated per month.
Duration-based metrics like LEAD_TIME_FOR_CHANGES, MEAN_TIME_TO_RESTORE, AVG_TIME_TO_COMPLETE, or TIME_TO_FIRST_COMMENT are always measured in days and are not affected by granularity.
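For example, assuming the value is averaged over the report window, a developer with 12 coding days in a 4-week report period is reported as 3 coding days per week with weekly granularity, or roughly 0.43 coding days per day with daily granularity. The underlying activity is identical; only the normalization changes.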
Nested objects
To configure Productivity metrics with ProductivityRequestDto:
{
  "metrics": ["PR_VELOCITY_PER_DEV", "CODING_DAYS_PER_DEV"]
}
- metrics (required): List of productivity metric names to include in the export.
The following Productivity metrics are available:
| Metric Name | Context | Unit | Description |
|---|---|---|---|
| TIME_TO_FIRST_COMMENT | Time to First Comment | Days (CSV contains numeric values without unit labels) | Measures the average time taken for the first comment to be made on a pull request. Helps track review responsiveness and team collaboration speed. Lower values indicate faster initial engagement on PRs. |
| PR_VELOCITY_PER_DEV | PR Velocity per Developer | Lines of code per developer per time period (affected by granularity) | Measures the average PR size (lines of code changed) per developer. Indicates developer productivity in terms of code contribution volume. Helps identify development throughput patterns. |
| WORKTYPE_COMPLETED_PER_DEV | Work Completed per Developer | Work items per developer per time period (affected by granularity) | Tracks bugs resolved with priority per developer. Measures developer contribution to bug resolution and issue completion. Provides insights into issue resolution capacity. Example: 1.22 means ~1.22 work items completed per week (if granularity is weekly). |
| CODING_DAYS_PER_DEV | Coding Days per Developer | Days per time period (affected by granularity) | Counts the number of days a developer actively contributed code. Helps track developer engagement and activity levels. Useful for understanding work patterns and availability. Example: 3 means 3 coding days per week (if granularity is weekly). |
| NUMBER_OF_COMMENTS_PER_PR | Number of Comments per PR | Count (dimensionless number) | Average number of review comments per pull request. Indicates code review thoroughness and collaboration intensity. Higher values may suggest more complex changes or detailed review processes. |
| AVG_TIME_TO_COMPLETE | Average Time to Complete | Days (CSV contains numeric values without unit labels) | Average time to complete a work item from start to finish. Measures development cycle efficiency. Helps identify bottlenecks in the development process. |
To configure Efficiency metrics with EfficiencyRequestDto:
{
  "metrics": ["LEAD_TIME_FOR_CHANGES", "DEPLOYMENT_FREQUENCY"],
  "calculationType": "MEAN"
}
- metrics (required): List of efficiency metric names to include in the export.
- calculationType (optional): Aggregation method (defaults to MEAN).
The following Efficiency (DORA) metrics are available:
| Metric Name | Context | Unit | Description |
|---|---|---|---|
| LEAD_TIME_FOR_CHANGES | Lead Time for Changes | Days (CSV contains numeric values without unit labels) | Measures the time from code commit to production deployment. Key indicator of delivery speed and process efficiency. Lower lead times indicate faster feature delivery and more agile development. Industry benchmark: Elite performers achieve lead times under 1 day. |
| DEPLOYMENT_FREQUENCY | Deployment Frequency | Deployments per time period (affected by granularity) | Tracks how often code is deployed to production. Indicates team's ability to deliver value continuously. Higher frequency suggests better automation and CI/CD maturity. Example: 20 means 20 deployments per week (if granularity is weekly). Industry benchmark: Elite performers deploy multiple times per day. |
| CHANGE_FAILURE_RATE | Change Failure Rate | Percentage (0–100, CSV contains numeric values without unit labels) | Percentage of deployments that cause failures in production. Measures quality and stability of releases. Lower rates indicate better testing, quality assurance, and deployment practices. Industry benchmark: Elite performers maintain CFR below 15%. |
| MEAN_TIME_TO_RESTORE | Mean Time to Restore (MTTR) | Days (CSV contains numeric values without unit labels) | Time to recover from a production failure or incident (aggregated using calculationType). Indicates team's incident response capability and system resilience. Lower MTTR suggests effective monitoring, alerting, and rollback procedures. Industry benchmark: Elite performers restore service in under 1 hour. |
| OVERALL_DORA | Overall DORA Score | Score/Rating (implementation-specific) | Composite metric combining all four DORA metrics. Provides holistic view of team's DevOps performance. Used to classify teams as Elite, High, Medium, or Low performers. Based on Google's DORA State of DevOps research. |
The following calculation types are available:
| Aggregation Type | Description |
|---|---|
| MEAN (Average) | Calculates the arithmetic mean of all metric values. Provides a balanced view of typical performance. Best for understanding overall trends. Can be skewed by outliers. |
| MEDIAN (50th Percentile) | The middle value when all metrics are sorted. More resistant to outliers than mean. Represents typical performance for half the team. Best for understanding central tendency when data has outliers. |
| P90 (90th Percentile) | 90% of metric values fall below this threshold. Highlights performance of top performers. Useful for identifying best-case scenarios. Helps set aspirational targets. |
| P95 (95th Percentile) | 95% of metric values fall below this threshold. Focuses on exceptional performance. Useful for capacity planning and SLA definitions. Helps identify peak performance patterns. |
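To see how the aggregation types differ, consider five lead-time values: 1, 2, 3, 4, and 20 days. The MEAN is 6 days (pulled upward by the outlier), the MEDIAN is 3 days, and the P90 is approximately 20 days (the exact value depends on the percentile method used). When data contains outliers like this, MEDIAN gives a truer picture of typical performance.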
Contributor ratings
If you set "includeRatings": true, the CSV file includes rating columns for applicable metrics. Ratings appear only for metrics where performance tiers exist and reflect developer-specific data, not the team's aggregated value.
For example:
| Collection Name | Lead Time for Changes (mean) | Lead Time for Changes Rating | Deployment Frequency | Deployment Frequency Rating | Mean Time to Restore (mean) | Mean Time to Restore Rating |
|---|---|---|---|---|---|---|
| Parent Team | 15.53 | Medium | 0 | — | 45.97 | Low |
Response format
Success Response (200 OK)
- Content-Type: text/csv
- Content-Disposition: attachment; filename="<report_name>.csv"
- Body: CSV file with report data.
The CSV file structure contains:
- One row per team (including child teams)
- One column per metric
- First row: Column headers with metric names
- Subsequent rows: Data rows with metric values for each team
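As an illustration, a team-level productivity export might look like the following (hypothetical team names and values; the exact headers depend on the metrics you request):

Collection Name,PR Velocity per Developer,Coding Days per Developer
Parent Team,182.4,3.1
Child Team A,210.7,2.8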
Exports productivity metrics for individual developers within a team.
Request Body: ContributorExportRequestDTO (see the Request body structure section below)
Response: A CSV file containing one row per developer and one column per metric.
Request body structure
For ContributorExportRequestDTO:
{
  "collectionId": "1",
  "dateStart": "2025-11-03",
  "dateEnd": "2025-11-30",
  "granularity": "WEEKLY",
  "includeRatings": true,
  "productivityContributors": {
    "metrics": [
      "PR_VELOCITY",
      "WORK_TYPE_COMPLETED",
      "CODING_DAYS",
      "TIME_TO_FIRST_COMMENT",
      "NUMBER_OF_COMMENTS_PER_PR",
      "AVG_TIME_TO_COMPLETE",
      "NO_OF_PRS_WITH_MISSING_TICKETS"
    ]
  }
}
Field descriptions
| Field | Type | Required | Description |
|---|---|---|---|
| collectionId | String | Yes | Identifier for the team of developers. |
| dateStart | Date (yyyy-MM-dd) | Yes | Start date for the report period. |
| dateEnd | Date (yyyy-MM-dd) | Yes | End date for the report period. |
| granularity | String | No | Time unit for rate-based metrics. |
| includeRatings | Boolean | No | Whether to include developer rating summaries. |
| productivityContributors | ContributorMetricsConfig | Yes | Metrics to include in the developer export. |
Granularity
The granularity field specifies the time unit used to normalize rate-based team metrics, which measure counts or activity over time. Granularity determines how these metrics are aggregated and represented in the CSV output.
Team-level metrics affected by granularity include the following:
- CODING_DAYS: For example, 3 coding days per week.
- WORK_TYPE_COMPLETED: For example, 1.22 work items completed per week.
- DEPLOYMENT_FREQUENCY: For example, 20 deployments per week.
Common granularity values include the following:
- daily: Metrics calculated per day.
- weekly (default): Metrics calculated per week.
- monthly: Metrics calculated per month.
Nested objects
To configure Productivity metrics with ContributorMetricsConfig:
{
  "metrics": [
    "PR_VELOCITY",
    "WORK_TYPE_COMPLETED",
    "CODING_DAYS",
    "TIME_TO_FIRST_COMMENT",
    "NUMBER_OF_COMMENTS_PER_PR",
    "AVG_TIME_TO_COMPLETE",
    "NO_OF_PRS_WITH_MISSING_TICKETS"
  ]
}
- metrics (required): List of metric names to include in the export.
The following Productivity metrics are available:
| Metric Name | Context | Unit | Description |
|---|---|---|---|
| PR_VELOCITY | PR Velocity | Lines of code per developer per time period (affected by granularity) | Measures average PR size (lines of code changed) per developer. Indicates code contribution throughput over time. Useful for understanding development volume across contributors. |
| WORK_TYPE_COMPLETED | Work Items Completed | Count per developer per time period (affected by granularity) | Number of prioritized work items completed by a contributor. Helps assess individual delivery output. Example: 2.1 means ~2.1 items/week if granularity is weekly. |
| CODING_DAYS | Coding Days | Days per time period (affected by granularity) | Number of days the contributor actively wrote code. Correlates with availability and engagement. |
| TIME_TO_FIRST_COMMENT | Time to First Comment | Days (CSV contains numeric values without units) | Time between PR creation and the first review comment. Lower values reflect faster review responsiveness. |
| NUMBER_OF_COMMENTS_PER_PR | Review Intensity | Count | Average number of review comments per PR. Indicates review depth or PR complexity. |
| AVG_TIME_TO_COMPLETE | Average Time to Complete | Days | Time from work start to completion. Higher values may indicate delays or larger tasks. |
| NO_OF_PRS_WITH_MISSING_TICKETS | PR Hygiene | Count | Number of pull requests missing linked work items. Useful for tracking process adherence and hygiene issues. |
Response format
Success Response (200 OK)
- Content-Type: text/csv
- Content-Disposition: attachment; filename="<report_name>.csv"
- Body: A CSV file with team-level metrics for individual developers.
The CSV file structure contains:
- One row per collection, including the org tree, each manager, the manager's direct reports, and individual contributors
- One column for the collection name
- One column per metric included in the export (for example, PR Velocity per Developer)
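As a sketch, a contributor export with includeRatings enabled might produce rows like these (hypothetical contributor names and values; rating columns follow the metric-plus-Rating header pattern shown in the Contributor ratings example above):

Collection Name,PR Velocity,PR Velocity Rating,Coding Days,Coding Days Rating
Engineering Org,195.2,High,3.0,Medium
Jane Doe,240.8,High,3.4,High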
Usage examples
Example 1: Export Team Productivity Report
curl -X POST "https://api.example.com/v2/insights/teams/reports?format=csv" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <YOUR_API_KEY>" \
  -d '{
    "dateStart": "2024-01-01",
    "dateEnd": "2024-03-31",
    "teamRefId": 456,
    "granularity": "weekly",
    "productivity": {
      "metrics": ["PR_VELOCITY_PER_DEV", "CODING_DAYS_PER_DEV", "NUMBER_OF_COMMENTS_PER_PR"]
    }
  }' \
  --output team_productivity_report.csv
Example 2: Export Team Efficiency Report (DORA Metrics)
curl -X POST "https://api.example.com/v2/insights/teams/reports?format=csv&projectIdentifier=myproject" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <YOUR_API_KEY>" \
  -d '{
    "dateStart": "2024-01-01",
    "dateEnd": "2024-12-31",
    "teamRefId": 123,
    "efficiency": {
      "metrics": ["LEAD_TIME_FOR_CHANGES", "DEPLOYMENT_FREQUENCY", "MEAN_TIME_TO_RESTORE"],
      "calculationType": "MEDIAN"
    }
  }' \
  --output efficiency_report.csv
Example 3: Export All Productivity Metrics (Empty Array)
curl -X POST "https://api.example.com/v2/insights/teams/reports?format=csv" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <YOUR_API_KEY>" \
  -d '{
    "dateStart": "2024-01-01",
    "dateEnd": "2024-12-31",
    "teamRefId": 789,
    "productivity": {
      "metrics": []
    }
  }' \
  --output all_productivity_metrics.csv
When the metrics array is empty, all available metrics for that category will be exported.
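Example 4: Export Team-Level Metrics for Individual Developers

Contributor exports use the same endpoint with a ContributorExportRequestDTO body, as described above. For example:

curl -X POST "https://api.example.com/v2/insights/teams/reports?format=csv" \
  -H "Content-Type: application/json" \
  -H "x-api-key: <YOUR_API_KEY>" \
  -d '{
    "collectionId": "1",
    "dateStart": "2025-11-03",
    "dateEnd": "2025-11-30",
    "granularity": "WEEKLY",
    "includeRatings": true,
    "productivityContributors": {
      "metrics": ["PR_VELOCITY", "CODING_DAYS", "TIME_TO_FIRST_COMMENT"]
    }
  }' \
  --output contributor_productivity_report.csv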
Best practices
- Format Support: Currently, only CSV format is supported. Attempting to use other formats will result in an error.
- Date Format: All date fields must be in yyyy-MM-dd format.
- Filename Generation: The export service automatically generates appropriate filenames based on the report content.
- Empty Results: Empty or null result sets will return an empty CSV file (0 bytes).
- Child Teams: The CSV automatically includes the specified team and all child teams under it.
- Metric Names: Metric names are case-insensitive (e.g., lead_time_for_changes and LEAD_TIME_FOR_CHANGES both work).
- Empty Metrics Array: If the metrics array is empty or not provided, all available metrics for that category will be exported.
- Default Calculation Type: If calculationType is not specified for Efficiency metrics, it defaults to MEAN.
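If you export the same report for several periods, a small shell loop keeps the requests consistent. This is a minimal sketch reusing the team-level productivity request from Example 1 (an empty metrics array exports all productivity metrics):

# Export one productivity CSV per quarter for team 456.
for range in "2024-01-01 2024-03-31" "2024-04-01 2024-06-30" "2024-07-01 2024-09-30"; do
  set -- $range  # $1 = start date, $2 = end date
  curl -X POST "https://api.example.com/v2/insights/teams/reports?format=csv" \
    -H "Content-Type: application/json" \
    -H "x-api-key: <YOUR_API_KEY>" \
    -d "{\"dateStart\": \"$1\", \"dateEnd\": \"$2\", \"teamRefId\": 456, \"productivity\": {\"metrics\": []}}" \
    --output "productivity_$1_to_$2.csv"
done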