Asking for Help — Life and Tech

Aveline Wingfield
7 min read · Jan 7, 2021


Asylum seekers deserve to be represented, and their voices deserve to be heard. I lived in Berlin during the refugee crisis. I sat in the waiting room of the Foreigner’s Office, holding my ticket, just one among hundreds. I walked past the tent where so many stayed when they had no place to go. I helped refugees find their way through the hallways to their waiting rooms and navigate the labyrinthine subway system. I witnessed firsthand the joy and sorrow a simple answer can bring.

The courage it takes to leave your home, culture, and language behind and venture into the unknown is indescribable. No two refugees I talked to had the same story. The one thing the people I spoke with had in common was that they were asking for help. They had gone through unspeakable things and had arrived at their destination. The final step was dealing with a complex legal system, in a foreign language, in a foreign country, with sometimes very hostile gatekeepers.

They are asking for help.

Human Rights First is a non-partisan advocacy group, founded in 1978 and originally known as the Lawyers Committee for International Human Rights. The organization matches asylum seekers who could not otherwise afford representation with pro bono lawyers. Over the past forty years, it has built a bipartisan coalition of activists and lawyers to lead by example and challenge America to live up to its promise.

Representing these asylum seekers is a tough job. The lawyers perform this service pro bono, not for payment but out of an innate desire to help another human being. There are far more refugees than there are lawyers, and the lawyers have a lot on their plates. They needed a tool that would help them make their most persuasive case to the judge.

We were asked to build a service that would take the data from immigration cases that have already been ruled on, analyze that data, and serve it back to the organization in a way that would be helpful to the team. My task, as the backend developer, was to connect the beautiful front end that my team would build to all of the very technical incoming data that our data scientists would produce.

The tool that we created helps the representatives frame their story in the most effective way possible. We are able to see which keywords have previously had a positive or negative effect on the judge who will be overseeing their case. The judge doesn’t have much time to make their decision, so it is important that the representatives can make the best possible case for their client.

My greatest concern going into this project was that whatever I made would not be enough.

The product needed very specific functionality. Lawyers needed to be able to view and download the original case PDFs and download CSV files of both case and judge data, and they also wanted to be able to upload their own files into the database.

The biggest issue, as I saw it, was all of the interaction involving files and folders. Any time I sent a file for download or viewing, I had to write the file locally. This was not only inefficient in its use of memory, it was also insecure. Since the data we are dealing with is sensitive, we didn’t want to store it outside of the AWS datastore, or store anything in a way that would be easily accessible to an unintended user.

The first challenge was to serve the files. Our data is stored in a SQL database, and our API passes it around as JSON. JSON is easy enough to convert into CSV and serve to the front end using json2csv.

Convert JSON to CSV using JSON2CSV
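
A minimal sketch of that conversion inside an Express route. The Parser class is json2csv’s API, while the route path and the fetchCaseData helper are hypothetical stand-ins for our real query:

```javascript
// npm install express json2csv
const express = require('express');
const { Parser } = require('json2csv');

const app = express();

// Hypothetical stand-in for the real database query.
async function fetchCaseData(id) {
  return [{ case_id: id, outcome: 'granted', protected_ground: 'religion' }];
}

// Convert a case's JSON records to CSV and send them as a download.
app.get('/cases/:id/csv', async (req, res) => {
  const caseData = await fetchCaseData(req.params.id);
  const parser = new Parser(); // field names are inferred from the data
  const csv = parser.parse(caseData);

  res.header('Content-Type', 'text/csv');
  res.attachment(`case-${req.params.id}.csv`); // sets Content-Disposition
  res.send(csv);
});
```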

However, the data on the judges would require multiple CSVs, and sending multiple files, in this instance, was best accomplished with a zip file. This was the first of many times during the month that I would need assistance. Each of my CSVs would send individually, but I couldn’t get the zip file to send.

After seeking advice, I realized that I was sending the data incorrectly.

zip files using JSZip
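
Sketched below under the assumption that the inputs are CSV strings from json2csv: the archive is generated as a Node buffer with JSZip’s generateAsync, written out, and sent with res.download. The helper name and its arguments are placeholders of mine, not the original gist:

```javascript
// npm install jszip
const JSZip = require('jszip');
const fs = require('fs/promises');

// Bundle several CSV strings into one archive and send it as a download.
// csvFiles is assumed to map filenames to CSV strings.
async function sendZip(res, csvFiles, zipPath) {
  const zip = new JSZip();
  for (const [name, csv] of Object.entries(csvFiles)) {
    zip.file(name, csv);
  }

  // Generate the archive as a Node Buffer, then write it out so it can
  // be sent; at this stage the file still lands on the local disk.
  const buffer = await zip.generateAsync({ type: 'nodebuffer' });
  await fs.writeFile(zipPath, buffer);

  res.download(zipPath); // sets the download headers for us
}
```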

It was during this conversation that we also talked about another problem I had noticed. In order to send a file as a download, I first needed to write it to local storage, and over time this would lead to far too many files being created and left behind. My advisor pointed out that I could probably build a caching system to handle that.

Having never made a caching system, and having only recently become familiar with hash tables, I found that prospect daunting. I reached out again, to another developer, for advice on the pitfalls to avoid and the things I should include. Again, I feared that any system I built would be neither fast nor secure enough.

This developer suggested instead that I use a pre-built system and gave me his personal recommendations. Based on the needs of the project, I selected cacache.

npm install cacache

cacache allows me to safely store the client’s search results as data, and it also has a built-in temporary file management system. Once I wrapped my file-writing functions inside its temporary directory, each file disappears from local storage as soon as it has been sent, and I can save the underlying data in the cache.

Created a Tempfile to send Zip Files
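
A sketch of that pattern: cacache.tmp.withTmp hands the callback a scratch directory and removes it once the returned promise settles. The cache path and helper names here are placeholders:

```javascript
// npm install cacache
const cacache = require('cacache');
const fs = require('fs/promises');
const path = require('path');

const CACHE_PATH = './.hrf-cache'; // hypothetical on-disk cache location

// Write the zip into a scratch directory that cacache deletes as soon as
// the callback settles, so no download files linger on the server.
async function sendZipFromTmp(res, zipBuffer, filename) {
  await cacache.tmp.withTmp(CACHE_PATH, async (tmpDir) => {
    const filePath = path.join(tmpDir, filename);
    await fs.writeFile(filePath, zipBuffer);

    // res.download is callback-based, so wrap it in a promise to make
    // withTmp wait for the transfer to finish before cleaning up.
    await new Promise((resolve, reject) => {
      res.download(filePath, filename, (err) => (err ? reject(err) : resolve()));
    });
  });
}
```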

I could see an immediate improvement in the application’s performance: sending and receiving data was much faster, and the call time for file construction improved massively. Encouraged, I wrote a middleware to cache as many of the search results as possible.
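
The middleware follows a familiar cache-aside shape. A simplified sketch, with a hypothetical cache key and the error handling pared down; the serialization detail it depends on is the one that tripped me up, as described next:

```javascript
const cacache = require('cacache');

const CACHE_PATH = './.hrf-cache';

// Serve a cached copy when one exists; otherwise let the route run and
// capture whatever it sends, so the next request is a cache hit.
function cacheSearchResults(req, res, next) {
  const key = req.originalUrl; // hypothetical key: the full request URL

  cacache
    .get(CACHE_PATH, key)
    .then((cached) => {
      // Hit: entries come back as Buffers, so parse before responding.
      res.json(JSON.parse(cached.data.toString()));
    })
    .catch(() => {
      // Miss: intercept res.json to store the payload on its way out.
      const originalJson = res.json.bind(res);
      res.json = (body) => {
        cacache.put(CACHE_PATH, key, JSON.stringify(body)).catch(() => {});
        return originalJson(body);
      };
      next();
    });
}
```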

Another benefit of the caching system is that it stores its entries compactly, as raw strings and buffers, which meant I first needed to convert my data from JSON to a string. After trial, many errors, and a lot of googling, I realized why my storage wasn’t working the way I thought it was.

The String method converts a value to its generic string representation (for a plain object, that is just ‘[object Object]’), while JSON.stringify serializes the data itself into a string that can be parsed back later. I also installed another module to handle converting CSV data into a usable string for storage.
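
The difference is easy to see at a Node REPL (the judge object here is made up):

```javascript
const judge = { name: 'Doe', grantRate: 0.42 };

String(judge);         // '[object Object]'  (the data itself is gone)
JSON.stringify(judge); // '{"name":"Doe","grantRate":0.42}'

// Only the stringified form survives a round trip through the cache:
JSON.parse(JSON.stringify(judge)); // { name: 'Doe', grantRate: 0.42 }
```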

Once the data has been stored, it must eventually be extracted again, and extracted data must be parsed back into CSV or JSON before being sent to the user.

Parse Result back into JSON Data
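
Retrieval is the mirror image of storage; a small sketch, again with my placeholder cache path:

```javascript
const cacache = require('cacache');

const CACHE_PATH = './.hrf-cache';

// Pull an entry out of the cache and parse it back into JSON.
// cacache.get resolves with { data, metadata, integrity }; data is a Buffer.
async function getCachedJson(key) {
  const { data } = await cacache.get(CACHE_PATH, key);
  return JSON.parse(data.toString());
}
```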

Here I discovered another workflow mishap. After implementing my middleware, I made some changes to my model, and I was shocked to find that my changes weren’t reflected in the response data: the cache was still serving the old results.

Pro tip: make sure a call works as intended before you start saving and retrieving its results from the cache.

The final step I completed was automating a call to the primary endpoint. Using a CronJob, I scheduled the primary call to run thirty minutes after the database updates. This caches all of the data and makes the overall performance and user experience smoother.
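
A sketch of that schedule using the cron package’s CronJob class; the half-past-the-hour timing assumes the database refresh lands on the hour, and the endpoint URL is a placeholder:

```javascript
// npm install cron
const { CronJob } = require('cron');
const https = require('https');

// Fire at second 0, minute 30 of every hour, hitting the primary endpoint
// so its results are cached before any lawyer asks for them.
const warmCache = new CronJob('0 30 * * * *', () => {
  https.get('https://api.example.com/cases', (res) => {
    console.log(`cache warm-up responded with status ${res.statusCode}`);
  });
});

warmCache.start();
```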

By the end of the month, the whole team had put a lot of work into the project. The front end is very user friendly. We were able to implement a bookmarking system for the lawyers, so that they can save cases and judges as they do research for their clients. They can upload files and forms directly into the database, and they can retrieve other files stored in the database.

In the end, I am proud of the work I did in three short weeks. Starting from nothing, and with help and consultation, I was able to accomplish a lot.

  1. Downloadable CSV data for cases
  2. Downloadable CSV data for judges
  3. Downloadable PDF files for cases
  4. PDF rendering for original case files
  5. Endpoints for Cases, Judges, Keywords, Social Groups, & Protected Grounds
  6. Caching system that improves response time
  7. Temp file system that saves memory
  8. CronJob that automatically updates the database
  9. Filter that automatically populates the database
  10. CronJob that makes the first data call to the backend to cache the data

There are further tests left to write, and further documentation that needs to be added for the database. In the future, when the data science team is able to further refine the data, the data filter will likewise need to be refined.

I am certain that the calls to the database can be further optimized. There is always more work to be done and more efficiency to find.

The incoming team will have to spend some time reading the documentation for some of the key dependencies that I used. The real challenge will be growing and modifying the tables as more data becomes available.

My decision to migrate into the technical field was in no small part inspired by my desire to make a positive difference in the world. The project that I helped create for Human Rights First is hopefully the first of many I will be a part of that have a real, positive impact on the world around me.

Everyone needs a hand at some time or another. I would not have been able to complete this project without the help of my friends and colleagues. Hopefully, this tool that we designed will allow the lawyers and advocates at Human Rights First to better help their clients.

No person is an island; we would all do well to remember that.
