Parro is an automated tech interviewing platform. As the market for tech talent continues to grow, tech companies face a common problem: large volumes of software engineering applicants. From startups to large multinational corporations, reviewing and interviewing every applicant is costly. Is there a way to screen applicants that maintains the quality and integrity of the process while saving both time and money?

Target Users

Our initial target users are software companies screening large volumes of software engineering talent, though we hope that small and midsize companies that are tight on time and resources can also use Parro to streamline their hiring.

My Role

My role on the team centered on UX/UI design and frontend development, while my teammates focused on machine learning and the backend.


Design Principles

Prior to building the product, we laid out two core principles to guide our design:

  • Seamless process: The product was made to emulate an interview. Interviews are typically guided by the interviewer and should require little to no effort from the candidate. We implemented this by triggering frontend actions from backend responses; for example, our transitions fire on user input and on post-analysis feedback from the APIs and server.

  • User comprehension: Because the experience emulates human interaction in an interview, we built in delays that give the user time to comprehend each message or prepare a response, implemented with setTimeout and setInterval. We also kept the design minimal to reduce distraction and focus the interviewee's attention on the interview.
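To illustrate the comprehension-delay idea, here is a minimal sketch: pause before rendering the next prompt, scaled to the message length. The constants and function names are illustrative assumptions, not the actual Parro code.

```javascript
// Assumed pacing constants (not from the original implementation).
const BASE_DELAY_MS = 1200;
const MS_PER_CHAR = 30;

// Longer prompts get a longer pause so the interviewee can read them.
function comprehensionDelay(message) {
  return BASE_DELAY_MS + message.length * MS_PER_CHAR;
}

// setTimeout drives the "human" pacing between interview steps.
function showNextPrompt(message, render) {
  setTimeout(() => render(message), comprehensionDelay(message));
}
```

A short message like "Hello!" would render after roughly a second and a half; a long behavioral prompt would wait noticeably longer.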



This product was built in 13 hours at AngelHacks 2017 Silicon Valley. The frontend was built with React/Redux and styled with Bulma, a CSS framework. Video and audio blobs were captured using the RTC Video package and sent to the server, where they were handled by multer.
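As a sketch of the upload path, a recorded blob can be wrapped in a FormData body and posted to a multer-backed route. The route, field name, and filename convention below are assumptions for illustration, not the project's actual API.

```javascript
// Build a multipart body for one recorded answer. On the server, multer
// would read the 'answer' field (e.g. upload.single('answer')).
function buildAnswerForm(blob, questionId) {
  const form = new FormData();
  form.append('answer', blob, `question-${questionId}.webm`);
  return form;
}

// Hypothetical endpoint; fetch sets the multipart boundary automatically.
async function uploadAnswer(blob, questionId) {
  const res = await fetch(`/api/answers/${questionId}`, {
    method: 'POST',
    body: buildAnswerForm(blob, questionId),
  });
  return res.ok;
}
```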


Companies using Parro would get to quickly screen large numbers of applicants through the following features:


Landing: The interviewee is greeted with a welcoming message to begin the interview. After this screen, they won't need to trigger the interview progression again; the rest of the interview is audio-triggered.
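One way an audio trigger like this could work (a sketch under assumptions, not the actual implementation): sample the microphone's volume at a steady rate and advance to the next step once the recent samples stay below a silence threshold.

```javascript
// Assumed tuning values: ~2 seconds of quiet at 20 samples/sec.
const SILENCE_THRESHOLD = 0.05;
const SILENT_SAMPLES_NEEDED = 40;

// Given a rolling array of recent volume levels (0..1), decide whether
// the interviewee has finished speaking and the interview should advance.
function shouldAdvance(recentVolumes) {
  if (recentVolumes.length < SILENT_SAMPLES_NEEDED) return false;
  return recentVolumes
    .slice(-SILENT_SAMPLES_NEEDED)
    .every((v) => v < SILENCE_THRESHOLD);
}
```

In a browser, the volume samples would come from a Web Audio AnalyserNode reading the microphone stream.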


Behavioral Interview: Company hiring reps upload questions via our platform, and those questions are administered to interviewees. Each question is recorded in both video and audio. Video recordings are used to analyze eye contact, check for the presence of other people (a possible cheating signal), and capture other movement metrics. Audio recordings are used to produce a transcript of responses, enable per-question playback (useful for explaining code), assess speech clarity, extract a candidate's most discussed topics, and analyze nuance metrics such as confidence and sentiment.
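The "most discussed topics" idea can be approximated very simply: rank non-stopword terms in the transcript by frequency. The real pipeline used NLP analysis; this bag-of-words sketch and its tiny stopword list are illustrative assumptions.

```javascript
// A deliberately small stopword list for illustration only.
const STOPWORDS = new Set(['the', 'a', 'an', 'and', 'to', 'of', 'i', 'it', 'is', 'in']);

// Return the n most frequent non-stopword terms in a transcript.
function topTopics(transcript, n = 3) {
  const counts = new Map();
  for (const word of transcript.toLowerCase().match(/[a-z']+/g) || []) {
    if (STOPWORDS.has(word)) continue;
    counts.set(word, (counts.get(word) || 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([word]) => word);
}
```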



Q&A: Companies are typically asked similar questions by applicants. Therefore, we allow companies to upload answers to common questions, and as the interviewee asks a question, we use NLP to analyze their speech and map it to the appropriate response. The exchange is driven by the interviewee: each question they ask prompts the bot's matched answer.
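As a rough sketch of the mapping step: score each stored question by word overlap with what was asked and return the best match's answer. The hackathon build used NLP APIs for this; the bag-of-words matcher below is an assumption for illustration.

```javascript
// Lowercase word set for a piece of text.
function tokenize(text) {
  return new Set(text.toLowerCase().match(/[a-z']+/g) || []);
}

// Fraction of words shared between two token sets.
function overlap(a, b) {
  let shared = 0;
  for (const word of a) if (b.has(word)) shared++;
  return shared / Math.max(a.size, b.size, 1);
}

// Pick the stored answer whose question best overlaps the asked question.
function bestAnswer(asked, qaPairs) {
  const askedTokens = tokenize(asked);
  let best = null;
  let bestScore = 0;
  for (const { question, answer } of qaPairs) {
    const score = overlap(askedTokens, tokenize(question));
    if (score > bestScore) {
      bestScore = score;
      best = answer;
    }
  }
  return best; // null when nothing overlaps at all
}
```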


Technical Interview: Companies supply the technical interview prompt. Interviewees are given a text editor that runs code in various languages (e.g., C++, Python, JavaScript, Java, Ruby) and returns a stack trace if the code fails, or the output if it succeeds. Interviewees can also ask questions, which we map to answers using NLP. When time runs out, the code runs through various tests and the number of test cases passed is returned to the employer.
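The pass/fail tally at the end could look something like the sketch below: run the candidate's function against each test case and count matches, treating thrown errors as failures whose stack is surfaced. Sandboxing and multi-language execution are out of scope here, and all names are assumptions.

```javascript
// Run a candidate's solution against test cases and tally passes.
function runTests(candidateFn, testCases) {
  let passed = 0;
  const failures = [];
  for (const { input, expected } of testCases) {
    try {
      const actual = candidateFn(...input);
      // JSON comparison keeps the sketch simple for plain data results.
      if (JSON.stringify(actual) === JSON.stringify(expected)) passed++;
      else failures.push({ input, expected, actual });
    } catch (err) {
      // A thrown error counts as a failure; the stack goes back to the user.
      failures.push({ input, error: err.stack });
    }
  }
  return { passed, total: testCases.length, failures };
}
```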



Analytics: Using facial and audio analytics, Parro analyzes a candidate's speech and delivery and extracts insights from the content of their answers for the employer.


Looking Forward

This product was developed in less than 15 hours. Looking forward, we hope to move it beyond the demo stage and fully integrate our speech and video analysis; those features are built but not yet linked to our frontend. Though our target user segment features larger companies that screen large volumes of applications, we hope to pilot Parro with smaller companies and work hands-on to iron out the kinks.


For an in-depth technical look at the project, please visit the GitHub repo here.