Translation is a speech translation application designed to mediate communication between travelers and CBP officers. The app supports more than one hundred languages, eliminating language barriers and the need to call in translators, who are often not immediately available during the vetting process.

I have omitted and obfuscated confidential information in this case study. All information in this case study is my own and does not necessarily reflect the views of CBP.


To keep our borders secure and our nation safe, CBP must inspect everyone who arrives at a U.S. port of entry. CBP officers are authorized to ask questions about each traveler's trip and personal background. For non-English speakers, these interviews can be difficult. While there are dozens of translation apps available, this new application provides functionality specifically geared toward the needs of the officers.

Our goal for the project was to enable CBP officers to communicate with non-English speakers entering the country without the need to call in a translator. The app needed to be useful at both the primary and secondary inspection stations in U.S. airports.
Our high level goals were to:

• Make it fast and easy to use for officers and travelers.
• Give officers the freedom to communicate with non-English speakers.
• Create a platform for innovation that helps keep our nation safe.

I led the design of the mobile application between April 2019 and today, and collaborated with two developers and a product manager on all aspects of the application.

The app is currently in development and is set to be used at its first pilot location in August 2019.


We traveled to Dulles Airport and sat in on inspections conducted by officers at primary and secondary locations. Our goals were to understand the challenges travelers and officers faced and the workarounds they employed.

In 90% of instances, officers and travelers communicate with a desk between them. However, in some cases an officer may need to communicate while on the move using their CBP-issued smartphone. This meant that the first iteration of our application needed to be built for the phone, with the possibility of creating an accompanying desktop application in the future.

Currently, if a traveler does not speak English, an officer will either call in an on-site officer or translator who knows the language, or reach a translator by phone. In either case, wait times for travelers increase. For more common languages, translators are available on-site but may have to travel across terminals in order to help. For less common languages, officers must call a translation center, which also takes time and costs money.

The officers on site described some of the problems they had with human translators. The biggest problem, after cost and efficiency, was the tendency of some translators to judge an interviewee's credibility on gut feeling rather than facts. An app would reduce this kind of human error in the interview process.


The biggest challenge I faced throughout this project was balancing forward progress on designs with keeping two different audiences in mind. An officer working with the application on a daily basis has very different needs than a traveler communicating with an officer. My access to travelers was limited due to security concerns, but we were able to see the translation software we would be using in action with two travelers. We used Google Translate's API to conduct our testing and see how the software would function. To supplement our findings, we spent an hour simulating interviews between officers and bilingual team members.
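For readers curious what this kind of testing looks like in practice, here is a minimal sketch of a call to Google's public Cloud Translation (v2) REST endpoint. The endpoint URL and parameter names follow the public v2 documentation; `API_KEY` is a placeholder, and `build_translate_request`/`translate` are illustrative helpers, not the app's actual code.

```python
import json
import urllib.parse
import urllib.request

# Public Cloud Translation v2 endpoint (per Google's documentation).
TRANSLATE_URL = "https://translation.googleapis.com/language/translate/v2"


def build_translate_request(text, target_lang, source_lang=None, api_key="API_KEY"):
    """Assemble the query parameters for a v2 translate call.

    If source_lang is omitted, the API auto-detects the source language.
    """
    params = {"q": text, "target": target_lang, "key": api_key, "format": "text"}
    if source_lang:
        params["source"] = source_lang
    return params


def translate(text, target_lang, source_lang=None, api_key="API_KEY"):
    """POST the request and return the first translated string."""
    data = urllib.parse.urlencode(
        build_translate_request(text, target_lang, source_lang, api_key)
    ).encode()
    with urllib.request.urlopen(TRANSLATE_URL, data=data) as resp:
        body = json.load(resp)
    return body["data"]["translations"][0]["translatedText"]
```

A harness like this makes it easy to feed the same interview phrases through many language pairs and compare the round-trip quality, which is essentially what our simulated interviews did by hand.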

Design principles and the content prioritization framework helped create visibility into my decision-making process and galvanize the officers to share in the vision.

Displaying conversations

My earliest design challenge was to propose what functionality to include in our MVP.

I hypothesized that the priority for officers and travelers was to display a full transcript of their conversation as it was occurring. I did not yet have data to support this, so I asked all six officers I had contact with at Dulles Airport to rank the importance of the features we could possibly include in the app. My list of features was gathered from researching five popular mobile translation applications, as well as the stories I heard from the officers during our time at the airport.

I was not surprised that officers ranked content based on what they felt would lead to more efficient communication, and that they preferred a design that kept features as easily accessible as possible. The emphasis on accessible features, minimal clicks and taps, and efficient communication matched preferences I had heard from officers while working on other applications.

The two main features of our application emerged as Voice Conversations and Keyboard/Text Conversations. The results highlighted that we needed to prioritize verbal communication and that the ability to type on a keyboard was secondary. As a result of these findings, the Voice Conversations screen became our home screen, while the keyboard screen became our second screen.


For each feature phase, I went through cycles of requirements, consensus, approvals, detailed specs and handoffs.

My process involved creating digital wireframes. I used these wireframes to start a conversation with the developers about the technical feasibility of the designs, then translated them directly into high-fidelity design comps. Since I was working with many existing design patterns, it was relatively easy to move straight into high-fidelity designs.

The following are wireframes I created for the Conversation screen. I experimented with different ways to display translations. Based on feedback from the developers and our product owner, we decided to move forward with two screens (highlighted in blue), which entered the development phase of the application. This surprised me, as I had expected to move forward with only one of these screens, but the officers' feedback indicated that both provided useful functionality.

High-fidelity mockups

The final mockups are a product of the results from my research, user testing, and client feedback. I used the OIT Design Guide to guide the visual design.