When we launched the Humanitarian Data Exchange (HDX) one year ago at the Open Knowledge Festival in Berlin, our goal was to make humanitarian data easy to find and use for analysis. A year later, we’ve made a huge amount of progress towards that goal, but, as the United Nations is fond of saying, “more work remains.”
Our first-year focus was getting data into HDX — without useful data, we knew that no number of features or amount of design would make the site worthwhile. We began by working with partners in two locations, Colombia and Mali. We established a Data Lab in Nairobi to help uncover relevant data in East Africa. We developed crisis pages for the Ebola outbreak in West Africa and for the Nepal earthquake. We also created branded organization pages, initially to showcase the World Food Programme’s food data but subsequently for many more partners.
A year of stats
We track a number of statistics to help understand user behavior and to ensure we are having an impact. Below is a close look at a few key indicators:
A look at what’s to come
While we are encouraged by the numbers above, we want to do more for our existing users, and reach out to more people who can benefit from humanitarian data and data services. Here is what we have planned for the year ahead:
- We are releasing a new feature for previewing geospatial data on a map for immediate insight. Take a look at an example here.
- We are migrating the operational datasets stored in OCHA’s Humanitarian Response site to HDX to make it easier to find humanitarian data.
- We will improve search, making it easier to find and filter data that is being actively maintained.
- All organizations in HDX will be able to showcase their data through branded pages that include their own logos and color schemes.
- We will add functionality for HXL-tagged data to be processed and visualized through HDX.
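To give a sense of what HXL tagging enables, here is a minimal, hypothetical sketch (not HDX's actual implementation): an HXL-tagged dataset adds a row of hashtags such as #adm1 or #affected beneath the human-readable column headers, so a tool can locate columns by tag rather than by header text. The sample data and the `sum_hxl_column` helper below are illustrative assumptions.

```python
import csv
import io

# Hypothetical HXL-tagged CSV: the hashtag row (#adm1, #affected)
# sits directly beneath the human-readable headers.
RAW = """\
District,People affected
#adm1,#affected
Kathmandu,1200
Lalitpur,800
"""

def sum_hxl_column(text, tag):
    """Sum the values in the column whose HXL hashtag matches `tag`."""
    rows = list(csv.reader(io.StringIO(text)))
    hashtags, data = rows[1], rows[2:]
    col = hashtags.index(tag)  # locate the column by its HXL tag, not its header
    return sum(int(row[col]) for row in data)

print(sum_hxl_column(RAW, "#affected"))  # 2000
```

Because the hashtags are standardized, the same code works across datasets whose header wording differs — which is what makes automated processing and visualization of HXL data possible.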
Our team is motivated to take on these new challenges. We are grateful to the many users who have provided feedback and helped us stay user-driven, and we owe a special thanks to Aidan McGuire at Scraperwiki and Rufus Pollock at Open Knowledge for keeping us on solid technical footing.