# HDX API overview

The endpoints fall into two main categories: **data read endpoints** for *accessing and using data* and **data write endpoints** for *uploading (creating) and updating data*. The diagram below shows how the endpoints group by use case.

<figure><img src="/files/iqXyyokjgtyDUsVnq7mO" alt=""><figcaption></figcaption></figure>

Below you will find a description and sample endpoints for each group, followed by a table showing when to use each API endpoint.

## Data Read Endpoints

These endpoints provide read access to data on HDX.

### Metadata Endpoints

The metadata endpoints provide programmatic access to HDX metadata, letting users search, filter, and retrieve detailed information about organizations, datasets, and resources. Data users can use them to discover data, and data contributors can use them to automate data management.

See sample endpoints below:

* `package_search`
* `package_show`

See full information [here](https://docs.humdata.org/build/hdx-apis/metadata-endpoints).
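As a minimal sketch, the metadata endpoints can be called over HTTP using the standard CKAN action API path (`/api/3/action/...`), which HDX inherits from CKAN. The helper below only builds the request URL; the query term `"flood"` and the row limit are illustrative values.

```python
from urllib.parse import urlencode

HDX_ACTION_BASE = "https://data.humdata.org/api/3/action"

def build_search_url(query: str, rows: int = 5) -> str:
    """Build a package_search URL using the CKAN action API convention."""
    return f"{HDX_ACTION_BASE}/package_search?{urlencode({'q': query, 'rows': rows})}"

# Fetching results requires a network call, e.g. with the requests library:
# import requests
# result = requests.get(build_search_url("flood"), timeout=30).json()["result"]
# print(result["count"], "datasets matched")

print(build_search_url("flood"))
```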

### File Download

The file download URL allows full downloads of data resources on HDX. It is useful for programmatically retrieving complete files but does not support filtering or partial reads of large data.

See sample URL below:

* `https://data.humdata.org/dataset/<DATASET_ID>/resource/<RESOURCE_ID>/download/`

See information [here](https://docs.humdata.org/build/hdx-apis/file-download-endpoint).
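A short sketch of this pattern in Python: the helper fills in the URL template shown above, and the commented-out section illustrates a streamed download with the `requests` library (the output filename `data.csv` is an assumption).

```python
def resource_download_url(dataset_id: str, resource_id: str) -> str:
    """Build the download URL following the pattern shown above."""
    return (f"https://data.humdata.org/dataset/{dataset_id}"
            f"/resource/{resource_id}/download/")

# Downloading the file itself needs a network call, e.g.:
# import requests
# with requests.get(resource_download_url(ds, res), stream=True, timeout=60) as r:
#     r.raise_for_status()
#     with open("data.csv", "wb") as f:  # hypothetical output filename
#         for chunk in r.iter_content(chunk_size=8192):
#             f.write(chunk)

print(resource_download_url("<DATASET_ID>", "<RESOURCE_ID>"))
```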

### Tabular Data Endpoints

The Tabular Data Endpoints enable direct access to structured, tabular data resources on HDX. They support both native and SQL queries, allowing users to filter, aggregate, and query data without downloading entire files. This is particularly useful for creating data pipelines, dashboards, and real-time data analysis.

See sample endpoints below:

* `datastore_search`
* `datastore_search_sql`
* `datastore_info`

See full documentation [here](https://docs.humdata.org/build/hdx-apis/tabular-data-endpoints).
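To illustrate the two query styles, the sketch below builds request URLs for a native `datastore_search` call and a SQL call via `datastore_search_sql`, again using the CKAN action API path. The SQL statement and `<RESOURCE_ID>` placeholder are illustrative only.

```python
from urllib.parse import urlencode

HDX_ACTION_BASE = "https://data.humdata.org/api/3/action"

def datastore_search_url(resource_id: str, limit: int = 100) -> str:
    """Native query: returns rows from a resource without a full download."""
    params = {"resource_id": resource_id, "limit": limit}
    return f"{HDX_ACTION_BASE}/datastore_search?{urlencode(params)}"

def datastore_sql_url(sql: str) -> str:
    """SQL-style query via datastore_search_sql for filters and aggregations."""
    return f"{HDX_ACTION_BASE}/datastore_search_sql?{urlencode({'sql': sql})}"

# Hypothetical aggregation over a datastore table named by its resource id:
sql = 'SELECT COUNT(*) FROM "<RESOURCE_ID>"'
print(datastore_sql_url(sql))
```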

## Data Write Endpoints

These endpoints allow contributors to publish and update data on HDX.

All data contributor endpoints require an HDX API token. Tokens can be managed under your user profile. There are additional data write endpoints which are further documented in the CKAN documentation.

### Create Data Endpoints

The create endpoints allow contributors to programmatically create new datasets and data resources. They are used when onboarding new data or establishing an automated upload process for a pipeline. Files are uploaded directly to HDX, and associated metadata is registered under the dataset within an organization.

See full documentation [here](https://hdx-python-api.readthedocs.io/en/latest/) in the HDX Python API Library.
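A minimal sketch of a create flow with the HDX Python API library (`pip install hdx-python-api`), under the assumption that you have a write-enabled API token; the `user_agent` value, token placeholder, and metadata fields shown are illustrative, and real datasets require more mandatory metadata than this helper includes.

```python
def minimal_dataset_metadata(name: str, title: str, org: str) -> dict:
    """A minimal metadata dictionary; real datasets need additional
    required fields (license, maintainer, etc.) per the HDX Python API docs."""
    return {"name": name, "title": title, "owner_org": org}

def create_dataset(metadata: dict, file_path: str) -> None:
    """Sketch: create a dataset and upload one file resource to HDX."""
    # Imports are kept local so the pure helper above works without the library.
    from hdx.api.configuration import Configuration
    from hdx.data.dataset import Dataset
    from hdx.data.resource import Resource

    Configuration.create(hdx_site="prod", user_agent="my-pipeline",
                         hdx_key="YOUR_HDX_API_TOKEN")  # token from your profile
    dataset = Dataset(metadata)
    resource = Resource({"name": "data.csv", "format": "csv"})
    resource.set_file_to_upload(file_path)  # file is uploaded directly to HDX
    dataset.add_update_resource(resource)
    dataset.create_in_hdx()
```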

### Update Data Endpoints

The update endpoints allow contributors to modify existing datasets or resources on HDX. They are used for routine data refreshes and metadata corrections by replacing data resources in HDX, and they do not require the entire metadata schema dictionary.

See full documentation [here](https://hdx-python-api.readthedocs.io/en/latest/) in the HDX Python API Library.
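A sketch of a refresh flow with the same library, assuming an existing dataset and a write-enabled token (the dataset name, token placeholder, and choice of the first resource are illustrative). The small helper reflects that only the changed metadata fields need to be supplied.

```python
def metadata_patch(title=None, notes=None) -> dict:
    """Only the fields being corrected need to be supplied on update."""
    candidates = {"title": title, "notes": notes}
    return {k: v for k, v in candidates.items() if v is not None}

def refresh_dataset(dataset_name: str, new_file: str) -> None:
    """Sketch: replace a resource file on an existing HDX dataset."""
    from hdx.api.configuration import Configuration
    from hdx.data.dataset import Dataset

    Configuration.create(hdx_site="prod", user_agent="my-pipeline",
                         hdx_key="YOUR_HDX_API_TOKEN")
    dataset = Dataset.read_from_hdx(dataset_name)  # fetch existing metadata
    resource = dataset.get_resources()[0]          # resource to replace (example)
    resource.set_file_to_upload(new_file)
    dataset.update_in_hdx()
```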

## Choosing the right API endpoint

Each HDX API endpoint is designed for a specific type of task. Use the guidance below to quickly identify which API best fits your workflow.

|             **If you want to…**             |            **Use this Endpoint**           |                                                  **Why**                                                  |
| :-----------------------------------------: | :----------------------------------------: | :-------------------------------------------------------------------------------------------------------: |
|        **Search or explore datasets**       |      `package_search`, `package_show`      | Ideal for search, filtering, and metadata retrieval. Provides dataset and organization level information. |
|           **Download data files**           |                  File URL                  |                Use the resource download URL returned in the metadata to fetch full files.                |
|            **Query data schemas**           |              `datastore_info`              |                                    Returns column names and data types.                                   |
|   **Query data tables and large datasets**  | `datastore_search`, `datastore_search_sql` |     Enables querying or filtering without downloading the whole file, e.g. for dashboards or pipelines.     |
|   **Run complex filters or aggregations**   |           `datastore_search_sql`           |          Use SQL syntax for flexible queries like aggregations, joins, or conditional filtering.          |
|       **Access spatial geo datasets**       |                  File URL                  |                     Returns full GeoJSON or vector data for mapping or GIS workflows.                     |
|      **Programmatically write to HDX**      |               HDX Python API               |                               Allows authenticated users to create datasets.                              |
| **Automate metadata or dataset management** |               HDX Python API               |                 Allows authenticated users to update existing datasets and data resources.                |

### Getting started tips

If you’re unsure which HDX API endpoint to start with:

* Start with **metadata endpoints** for discovery or metadata.
* If you want to access data (tabular and geodata), use the **file URL** to access the entire data resource.
* Move to **Tabular Data Endpoints** if you need to query inside the data resource or if you are working with large files.
* **\[Data Contributors]** Use **data write endpoints or the HDX Python API library** to programmatically create and update datasets within your organization.

## Additional programmatic services

HDX is built on CKAN, an open-source data management platform. While this page highlights the most relevant endpoints for HDX users, additional CKAN endpoints are available in the broader open-source [documentation](https://docs.ckan.org/en/2.9/api/).

HDX and the open-source community also maintain tools that extend these APIs, making integration and automation easier for contributors and developers.

* [**HDX Python Wrapper**](https://github.com/OCHA-DAP/hdx-python-api?tab=readme-ov-file)**:** An HDX-supported Python library that wraps the metadata endpoints, making it easier to read, update, and publish HDX datasets from scripts or pipelines. This is the recommended way to push and edit data on HDX.
* [**HDX CLI Wrapper**](https://github.com/OCHA-DAP/hdx-cli-toolkit)**:** An open-source HDX contribution built on the Python API, enabling basic operations (search, upload, download) directly from the terminal (command-line interface).
* [**HDX Metadata Endpoints Notebook:**](https://github.com/OCHA-DAP/hdx-metadata-endpoints-notebook) Example Python notebook showing how to call different endpoints and explore HDX data.
* [**HDX Tabular Data Endpoints Notebook**](https://github.com/OCHA-DAP/hdx-datastore-api-python-quickstart)**:** Example Python notebook demonstrating API workflows, which may evolve into a lightweight Python package for analysis.

## Feedback and support

Our team reviews feedback regularly to improve the APIs and support the humanitarian data community.

If you encounter issues, have suggestions for improvement, or need guidance using the APIs, please email us at <hdx@un.org>.



---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.humdata.org/build/hdx-apis/hdx-api-overview.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
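A minimal sketch of composing such a request URL in Python; the sample question is illustrative, and URL-encoding the question (as `urllib.parse.quote` does here) is an assumption about how non-ASCII and whitespace should be passed.

```python
from urllib.parse import quote

PAGE = "https://docs.humdata.org/build/hdx-apis/hdx-api-overview.md"

def ask_url(question: str) -> str:
    """Build the ?ask= query URL described above."""
    return f"{PAGE}?ask={quote(question)}"

# A GET on this URL should return a direct answer plus relevant excerpts:
print(ask_url("Which endpoint supports SQL queries?"))
```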
