Project metadata
Project title
Project tagline Fulfilling NEPA’s promise through the power of data science
Project purpose Using AI to make 50 years of Environmental Impact data and related decisions generated by NEPA (National Environmental Policy Act of 1969) easily findable, in one place, usable by anyone.
Project summary NEPA has generated over 40,000 Environmental Impact Statements and possibly millions of pages of valuable science for infrastructure projects. But that information has been scattered and buried among multiple websites, making it difficult or impossible to access. NEPAccess is a multidisciplinary team using new data science that trains machines to read and organize this data, creating knowledge we can use to make better decisions on projects with environmental and social impacts.
Client/company University of Arizona, Udall Center for Studies in Public Policy
Time frame October 2019 to January 2023
My major responsibilities Usability testing and research, UI design, visual design, storyboarding
Platforms React JS (developers) blended with WordPress (UX designer)
Tools used Pencil, Balsamiq, Figma, Photoshop, WordPress, Saola Animate
Key performance metrics Search success rate. Wide adoption by seven persona groups: government agencies, academic researchers, environmental consultants, lawyers, NGOs, schools, and engaged citizens
Collaborators An interdisciplinary team of 20 members: computer scientists, developers, environmental and social scientists, public policy experts, lawyers, and Natural Resource students
Link to final project

Project summary

NEPAccess is a search and research platform that allows anyone to find and use over 50 years of valuable environmental data produced under NEPA, the National Environmental Policy Act of 1969. Because the technology to collect and organize this information did not exist in 1969, these documents became essentially lost: difficult or impossible to find using common web search engines. They existed, but were inaccessible, scattered and buried layers deep in multiple bureaucratic silos, libraries, and databases, in inconsistent media and formats.

Now, machine learning and natural language processing can find, collect, and read these thousands of government documents that each may contain hundreds or thousands of pages of valuable technical data. As this lost data is organized, it becomes information, then knowledge–useful to both government agencies and engaged citizens for making better decisions.

In the past, policy makers and researchers had to start each project from scratch, without easy access to what others had already done. This project enables people to learn from history, and to share knowledge of the economic, environmental, and social impacts of their projects.

Besides saving countless hours of mental labor, collecting information in one place creates a synergy that supports creative thinking and new actions. If something is so hard to find, it may as well not exist. Imagine a community concerned about how a proposed dam would affect their downstream fishing. They had no way of knowing that a similar project was built on the other side of the country. Thus they could not draw on the fish counts conducted during and after the project. Nor could they benefit from the other project’s creative plan modifications designed to accommodate aquatic life.

My role was UX researcher and UI designer. I collaborated with a developer and a team of data scientists, environmental and social scientists, public policy experts, lawyers, and students.

Discovery phase

Defining the problem

As I understood it, the problem this site solved was to use new data science tools to find and organize millions of pages of technical documents, lost in complex government structures, so they can be used to make better environmental policy decisions.

Wicked problems are challenges that span jurisdictions and disciplines, have multiple causes, and undergo rapid change with high uncertainty. NEPA is a governance tool that has generated the data that can help solve these problems, yet there is no central database for this information and it is too voluminous to access manually.

–From the NEPAccess National Science Foundation proposal

Stakeholder interviews

I began by listening to project stakeholders. I learned how problems resulted from the difficulty of finding original NEPA documents.

  • Easy public participation: “Being available and easily available are two different things. That’s what NEPAccess will do. The public, which means everybody, has the right to understand what’s going on in each process and have a say.”
  • Pain points for agencies and consultants: “If there’s a pain point, it is likely to be about finding similar documents, or just the fact that the documents are so huge that you don’t know exactly what’s going on in them. It can be really challenging to navigate them and get to the information that you need. NEPA users may not even know that they have those pain points, because people can’t really imagine it being any different.”
  • The biggest challenge: “Right now the biggest challenge is figuring out how people will want to use it at this point in time. Once we have that, then the biggest challenge will be: can we do what people want?”

How people find NEPA documents now

I knew that if I designed for a common citizen, the site would work better for an agency chairperson or law partner as well. To understand the current state of NEPA document findability from a human point of view, I ran some Google searches for projects in my area.

I was optimistic. How hard could this be? I found the website hosted by the government agency tasked with archiving environmental impact statements. I decided to look for the EIS for a controversial copper mine that has been in the local news for the past decade.

PAUL: typed Rosemont mine into the search box because I had heard that phrase on the news. 
System: No records met the search criteria

How can that be? I know the document exists and NEPA requires it to be made public.

I clicked a link to a library site that had a large collection of EISs. I got the same (lack of) results. Did I do something wrong? I felt confused and anxious. Surely a famous mining project like this would at least return a catalog record.

I wrote to the reference librarian listed in the sidebar. She wrote back, explaining that the document was on CD-ROM. She did give me the full title to see if that helped my search: “Final environmental impact statement for the Rosemont copper project: a proposed mining operation, Coronado National Forest Pima County, Arizona.” Most EIS titles are long like that.

PAUL: now armed with the title, pasted it into the search box labeled “title.”
System: No records met the search criteria.

PAUL: tried two words that were in the title: Rosemont mining
System: No records met the search criteria.

PAUL: tried Rosemont copper.
System: returned both the draft and final EIS.

But why did Rosemont mining return no results while Rosemont copper was successful? All of those words occur within the title.

Apparently, the two search words had to be a consecutive phrase in the title, not just somewhere within the title, or the document would not be found. I downloaded both the draft and final EIS.
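The failure mode above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the archive’s actual code: the legacy search apparently required the query to appear as one consecutive phrase in the title, while users expect every keyword to match anywhere.

```python
TITLE = ("Final environmental impact statement for the Rosemont copper "
         "project: a proposed mining operation, Coronado National Forest "
         "Pima County, Arizona")

def phrase_search(query: str, title: str) -> bool:
    """Legacy behavior: the whole query must appear as a consecutive phrase."""
    return query.lower() in title.lower()

def keyword_search(query: str, title: str) -> bool:
    """What users expect: each word may appear anywhere in the title."""
    words = title.lower().split()
    return all(term.lower() in words for term in query.split())

print(phrase_search("Rosemont copper", TITLE))   # True: words are adjacent
print(phrase_search("Rosemont mining", TITLE))   # False: words not adjacent
print(keyword_search("Rosemont mining", TITLE))  # True: both words present
```

The gap between those two functions is exactly the gap between the system model and the user’s mental model.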

Now I understood, from a user’s perspective, what we were up against. Being available and easily available were two different things.

The problem: these documents are complex. We needed to find out how people use them, what questions they want to answer, and how to simplify the UI so they can think about their research questions, not about how the search system works.


How people search

Search is difficult to design. A search user interface (UI) must be simple: it has to translate computer logic into elements that are either intuitive to humans or easily learned in plain language. Because people are so used to Google searches, and take that speed and invisible power for granted, we needed a high success rate. If people could not find relevant results, or got none at all, the site would lose credibility.

I set up a series of usability studies to learn how our audience searched and used NEPA documents. These were my early research questions:

      1. What are the users’ Jobs to be Done? (Gather goals and context)
      2. How do they currently do this? (Analyze workflow)
      3. What could be better about how they currently do this? (Find opportunities)
      4. Does their level of NEPA domain knowledge affect the usability of a search interface?
      5. What does common search psychology tell us? (Understand mental models)

The Evolution of a user interface

The developer first built the user interface (UI) to express the system in raw form. So, I started with what interaction designer Alan Cooper calls an “implementation model.” It literally reflects how the technology works. These original screens tested whether the system worked as expected.

However, people using a product have goals of their own and don’t need or want to know how the system works. Good UX and UI design protects humans from having to deal with this complexity. My goal was to find the “mental models” users had: how they imagine the system works. People interact with a system based on these mental images, which may be nothing like how the system works in the developer’s logic. This is the first point of friction: translating the system model to the various human mental models.

To adapt to the developer’s workflow, I held usability tests concurrently with the evolution of the interface, so user input was fresh and went directly into the design. For each stage, I ran the developer’s screens through usability tests and came back with user-based edits as sketches, wireframes, mockups, or simply an email. We worked through problems in weekly developer meetings.

The first screen

This is the first screen I received from the team manager when I joined the project in its second year:

Original developer’s UI, 2019, with a search for “Rosemont”

The UI featured a search box, a few metadata filters, and search results in a table format. When this was first demonstrated, the team cheered, because it worked. As you can see above, the single search term Rosemont brings up both the draft and final Environmental Impact Statement (EIS) for that copper mine, downloadable as PDFs.

A whiter shade 

The first thing stakeholders asked me to do was to change the black background. I also made a few other changes:

  1. Separated the elements into a hierarchy with meaningful H1 and H2 headings
  2. Made the search box the most prominent visual element
  3. Placed the advanced search filters into their own section
  4. Watched as the developer began to add boolean search forms and more and more tooltips to explain them.

First iteration search screen

It’s well documented that most users are confused by advanced search, yet team members thought users would need it for such complex content. I suggested we simplify the search box and make the advanced search an option: clicking the checkbox unwrapped the additional functions.

Make it open and close

Clicking a checkbox revealed the advanced search features.

Too many kinds of search

The looks on people’s faces express what I learned from this user test: combining advanced search options with the search box is confusing.

 1 inserts AND between each search term. 2 inserts OR. 3 is for words in exact order. 4 searches within all document text. 5 excludes words. 6 searches metadata.

The interface was literally showing all the different kinds of search the system could do with the database. At one point there was advanced search, simple search, default search, title-only search, metadata search, and full-text search. People were confused by these choices. They were used to using Google on other sites. “I just want to find what I’m looking for.”

Recommendation: design a minimal search interface. Search is a mentally intensive task. It is difficult for a user to think about their sequence of research questions and, at the same time, think about how the database works.

A unified search box

The so-called “Combined search” and the table format for search results.

The data scientists were able to combine the various search types into a single simple search box. People didn’t need those options: they were used to simple Google-style search. We incorporated everything we could into one search box and used keyboard modifiers for advanced search (for example, quotation marks indicate an exact phrase). We linked to a page of “search tips.”
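A unified box like this can be sketched as a small query parser. The token conventions (quotes for an exact phrase, a leading minus to exclude a word, everything else implicitly AND-ed) follow common Google-style modifiers; the parser itself is a hypothetical illustration, not the project’s code.

```python
import shlex

def parse_query(raw: str):
    """Split a Google-style query into phrase, include, and exclude terms."""
    phrases, include, exclude = [], [], []
    for token in shlex.split(raw):  # shlex keeps quoted phrases together
        if token.startswith("-"):
            exclude.append(token[1:])   # -word excludes a term
        elif " " in token:
            phrases.append(token)       # quoted multi-word exact phrase
        else:
            include.append(token)       # plain term, implicitly AND-ed
    return phrases, include, exclude

print(parse_query('"Rosemont copper" mining -uranium'))
# (['Rosemont copper'], ['mining'], ['uranium'])
```

The point of the design is that all of this stays invisible until a user reaches for a modifier; the default experience is a single box.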

The title similarity slider

No users understood what this slider element did, but it was fun to play with. Yet the function behind it was important to them, and the failure of this gizmo led to future success (see below).

NEPA is a process, made up of several stages: Draft, Final, Supplements, Record of Decision, etc. There was as yet no way to group these scattered documents once you found them, except by title similarity. Developers created the Title Similarity Slider to help find the other related documents by measuring the percentage of similar words in other titles. No users understood that “a lower match percentage yields more results.” That’s how the system measured similar documents, but not how people thought of it: they wanted to see the related documents grouped together.
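One plausible way to score a “percentage of similar words” between titles is Jaccard overlap of their word sets. This sketch is an assumption about how the slider might have measured similarity, not the team’s actual algorithm; the slider’s match percentage then acts as a threshold, and lowering it admits less-similar titles, hence more results.

```python
def title_similarity(a: str, b: str) -> float:
    """Jaccard overlap of two titles' word sets (hypothetical scoring)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

draft = "Draft environmental impact statement for the Rosemont copper project"
final = "Final environmental impact statement for the Rosemont copper project"

# 8 shared words out of 10 distinct words across both titles:
print(title_similarity(draft, final))  # 0.8
```

A score like 0.8 is easy for a system to compute but meaningless to a user, which is why the grouping eventually had to move under the hood.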

This tool’s failure inspired developers to design an algorithm that did this under the hood. But first we replaced the search results table with cards or “tiles,” something people intuitively understood.

Tables and cards

The table or spreadsheet format of the search results was familiar to the academic scientists on the team, and easy for the system to output. But the table had both layout and usability issues. I drew up ideas for a card-based layout: pencil sketch, Balsamiq wireframe, then Figma mockup.

The evolution of a filter and card interface from sketchbook, to Balsamiq mockup, to a Figma prototype

The mockups above show a design that transformed the metadata search into a left sidebar of checkbox filters called “Narrow your search,” a pattern also known as “faceted search.”
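Faceted search boils down to set-membership filtering: keep only the documents that match every facet the user has checked. The metadata fields and values below are hypothetical examples, not the actual NEPAccess schema.

```python
def apply_facets(docs, selected):
    """Keep documents matching every checked facet.
    `selected` maps a metadata field to the set of allowed values."""
    return [d for d in docs
            if all(d.get(field) in values for field, values in selected.items())]

docs = [
    {"agency": "Forest Service", "state": "AZ", "stage": "Final"},
    {"agency": "BLM",            "state": "AZ", "stage": "Draft"},
    {"agency": "Forest Service", "state": "MT", "stage": "Final"},
]

# User checks "Forest Service" under Agency and "AZ" under State:
print(apply_facets(docs, {"agency": {"Forest Service"}, "state": {"AZ"}}))
# [{'agency': 'Forest Service', 'state': 'AZ', 'stage': 'Final'}]
```

Each checkbox narrows the result set, so users refine a search by recognition rather than by composing query syntax.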

Combining related documents in a card

Anyone using NEPA data wants easy access to the related files within the same process. After we created the card-based search results, we were able to group the cards that belonged to the same NEPA process. People didn’t need to see how the computer made this happen. By hiding the system logic under a familiar pattern, the cards worked.

In the screen above, the Draft, Final, and Record of Decision cards are grouped, so people didn’t need to search for each file separately.
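Once the backend links documents that belong to the same process (the under-the-hood algorithm mentioned above), grouping them into one card is straightforward. The `process_id` field here is a hypothetical stand-in for that linkage, and the records are invented examples.

```python
from collections import defaultdict

documents = [
    {"title": "Draft EIS, Rosemont Copper Project",          "type": "Draft", "process_id": "rosemont"},
    {"title": "Final EIS, Rosemont Copper Project",          "type": "Final", "process_id": "rosemont"},
    {"title": "Record of Decision, Rosemont Copper Project", "type": "ROD",   "process_id": "rosemont"},
    {"title": "Final EIS, Keystone XL Pipeline",             "type": "Final", "process_id": "keystone"},
]

def group_into_cards(docs):
    """One card per NEPA process, listing its stages together."""
    cards = defaultdict(list)
    for doc in docs:
        cards[doc["process_id"]].append(doc["type"])
    return dict(cards)

print(group_into_cards(documents))
# {'rosemont': ['Draft', 'Final', 'ROD'], 'keystone': ['Final']}
```

The grouping logic lives entirely on the system side; the user just sees one card per project.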


From system models to mental models

This project was a unique opportunity to work on an innovative product from the ground up: a tool that would benefit the larger world in a pragmatic way, making science available to a complex legal process to make it more efficient and less costly, and potentially leading to better social and environmental policy decisions.

When I came on, I started from scratch with the user interface: the developer’s first screen. As developers invented new features, I tested each iteration with users in usability walk-throughs, a long incremental process that paid off. The users I interviewed became part of a growing community that challenged and supported us, suggested new ideas, and eventually spread the word to their colleagues.

The yard-sale effect

The most challenging part of this project was explaining to funders and administrators why we built this and how making lost and scattered documents available is a game changer for decision-making. Yet, during user interviews, people immediately grasped the usefulness of the system over the hacks and tedious workarounds they were used to. As in a yard sale, laying a complex set of items out where it’s easy to see everything allows a synergy in which new knowledge is created.

Cognitive load

People don’t want or need to think about how the system works–they have research questions to answer. By simplifying a user’s critical path through an interface, and knowing what they need, a designer frees up a user’s mental energy for additional creative thinking about the problem they want to solve.

Tech savvy?

The question “How tech-savvy are our users?” often comes up in team meetings with developers. Domain knowledge is often lumped together with technical knowledge. To me, this assumption is less useful than understanding basic human psychology. I found that making software simple and usable for everyone makes it work better for highly experienced professionals as well. We all share a common set of human sense-making capacities.

I listened to a legal research professor describe her students’ universal difficulty understanding a basic advanced-search interface. A top law-firm partner had difficulty with keyboard search modifiers simply because the specialized databases he was used to used different keystrokes. I tried to accommodate all of these insights.


Usability testing has a subtle magic. Observing people perform a realistic task on a system generates insights that melt through our assumptions and opinions. User behavior is often surprising. While watching people move through a scenario, I often think to myself, “I never would have thought of that until I saw them do it.” Human behavior forms the “most likely truth” that guides design choices and even generates new directions.


Participants. I interviewed around 30 people from 5 different personas or user groups.

Conversions. A year and a half after the site went public, and still in its beta-testing phase, we had 429 registered users who logged 1,158 searches. Downloading is considered a “conversion,” and people downloaded 2,129 environmental reviews.

Search success metrics. This is a next step, interrupted by a funding pause.

A testimonial from a team member

Laura shared with me your findings based on the three interviews conducted thus far. Excellent overview and analysis of priority fixes! … First, we learned SO much from these interviews. Thank you so much for organizing them and thinking through the resulting changes that need to be made. I also wanted to support the recommendation that the programmers work on some of these high priority fixes before we schedule additional interviews in September… With some of these basic items fixed now, we can then learn more about how users are likely to dig more deeply into more complex searches.

–Kirk Emerson, Professor of Practice in Collaborative Governance, University of Arizona School of Government and Public Policy


This process was a collaboration within a multidisciplinary team, but largely between me and the lead developer. In that spirit of friendship between differing minds, I left in the interface some of his work that I didn’t 100% agree with. It did no harm. My favorite notification of his was:
Proximity dropdown is disabled when certain special characters are used: ~ ? ” *
(See the title image at the top of the page.)
