Introduction

Every year, the Office of Undergraduate Education at UC Davis organizes a big conference for undergraduate student researchers to present their work. This year marks the 32nd edition! For the first time (and for obvious reasons), the event is being held virtually.

This morning, I watched several presentations and took some notes, which you can find below. I was obviously looking forward to seeing the presentations from the students whom I’ve been mentoring, but I also came across names of students I recognized from CS classes they took with me, and I watched their presentations as well!

Overall the presentations were very interesting, and I find myself really impressed by the students here. During my education in France, I only started research at the end of my master’s program, working on a mandatory six-month research project that counted towards graduation. But my work was certainly not publishable (at least I don’t think so!), as the goal was geared more towards “training” than actually “doing”.

Here, many students get involved with research right from the beginning of their undergrad, and some get to publish in top conferences before they graduate. But most importantly, students show a level of enthusiasm for these projects that is great to witness!

Presentations from students I’ve mentored

Hiroya Gojo - “LupSeat - A Smart Seat Assignment Generator”

Hiroya presented his terrific work on LupSeat, a tool that is meant to help instructors randomly assign seats to students for exams. The project was introduced last week in this article. You can find Hiroya’s poster here: Poster LupSeat URC.

Arjun Kahlon - “Lupgist: An interactive commenting system for code gists”

Arjun presented his work on LupGist, a project that aims to develop a lightweight, embeddable commenting system that allows visitors to comment directly on the code displayed in a code gist. The project is still under heavy development, but Arjun did a great job explaining our plan! You can find Arjun’s poster here: Poster LupGist URC.

Zesheng Xing - “An Automatic GitHub Takedown Tool for Educational Purposes”

In this presentation, Zesheng talked about his work on our Takedown tool. One of the problems instructors often face is that many students unfortunately upload their programming assignments to GitHub, even when syllabi courteously ask them not to. The goal of Zesheng’s project was therefore to provide an easy and automated way to track offending repositories and streamline the process of taking them down. Not surprisingly, many people left comments showing interest! You can find Zesheng’s slides here: Slides LupTakedown URC.
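To give a rough idea of what such automated tracking could involve, here is a minimal sketch (not Zesheng’s actual tool, and with a hypothetical assignment keyword) that queries GitHub’s public repository search API and lists candidate repositories for manual review before any takedown request:

```python
# Minimal sketch, not the actual LupTakedown tool: query GitHub's public
# repository search API for a (hypothetical) assignment keyword and print
# candidate repositories so an instructor can review them manually.
import requests

ASSIGNMENT_KEYWORD = "ecs150-project3"  # hypothetical assignment identifier

resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": ASSIGNMENT_KEYWORD, "sort": "updated"},
    headers={"Accept": "application/vnd.github+json"},
)
resp.raise_for_status()

for repo in resp.json().get("items", []):
    print(repo["full_name"], repo["html_url"])
```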

Madison Brigham - “Gender Differences in Class Participation in Core Computer Science Classes”

Madison presented her work analyzing 10 quarters’ worth of class participation data for ECS 36C (our data structures course at UC Davis) and ECS 150 (our OS course), studying whether there were differences between male and female students when it came to class participation.

Madison’s work is soon to be published at ITiCSE’21 and was featured earlier this month in this article. You can find Madison’s slides here: Slides Class Participation URC.

Presentations from other CS students

Hannah Brown - ‘“It’s my normal”: A Linguistic Analysis of In and Outgroup Perceptions of PTSD in Young Adults’

Hannah presented a pilot study that they started as a project for a linguistics class and continued afterwards. They interviewed 6 participants, half of whom suffer from PTSD, and asked them the same set of 8 questions. Their goal was to determine whether the ingroup (participants with PTSD) and the outgroup (participants without PTSD) had the same perceptions of what PTSD is and how one can heal from it.

Hannah found two notable differences between the groups. The first was in the definition of “normalcy”. While the ingroup described themselves as abnormal in situations triggered by their PTSD, the outgroup contrasted having PTSD with being normal. The second interesting difference was in the description of symptoms. While the ingroup described their internal struggles related to PTSD with specific examples and stories, the outgroup focused on difficulties related to the perceptions others can have of someone who has PTSD.

I thought it was a great talk, and I particularly appreciated the insight Hannah provided about the re-contextualization of normalcy. After all, after going through a traumatic event, experiencing PTSD effects should certainly be considered “normal”…

Trevor Carpenter - “2020 U.S. Twitter Analysis: A Knowledge Extraction of Events and Public Influence”

Trevor’s work aimed to study social interactions on Twitter during large events. The first challenge in doing so is to “understand” tweets at a massive scale. For that, Trevor used word2vec, a well-known word vectorization tool, together with a neural network. His goal was to classify tweets into three categories: positive, neutral, and negative.
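To give a rough idea of the general approach, here is a minimal sketch (not Trevor’s actual pipeline, and with a toy corpus standing in for real tweets) that averages word2vec embeddings per tweet and trains a small neural network to predict one of the three sentiment labels:

```python
# Minimal sketch, not Trevor's actual pipeline: represent each tweet as the
# average of its word2vec embeddings, then train a small neural network to
# predict one of three sentiment labels (0 = negative, 1 = neutral, 2 = positive).
import numpy as np
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

# Hypothetical toy corpus: tokenized tweets with hand-assigned sentiment labels
tweets = [["terrible", "news", "today"],
          ["the", "briefing", "starts", "at", "noon"],
          ["great", "work", "everyone"]]
labels = [0, 1, 2]

# Learn word embeddings from the tweet corpus
w2v = Word2Vec(sentences=tweets, vector_size=50, min_count=1, seed=1)

def tweet_vector(tokens):
    """Average the word vectors of a tweet's tokens."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.array([tweet_vector(t) for t in tweets])

# Small feed-forward network as the 3-class sentiment classifier
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
clf.fit(X, labels)

# Predicted label for a new tweet (0 = negative, 1 = neutral, 2 = positive)
print(clf.predict([tweet_vector(["great", "briefing"])]))
```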

Through this analysis, Trevor found some interesting results. For example, shorter tweets using strong language tend to be more negative, whereas longer tweets with more neutral language tend to be more positive. Tweets also tend to cluster: negative tweets attract other negative tweets, and conversely, positive tweets attract other positive tweets.

Trevor also performed a more detailed analysis of the tweets surrounding the Presidential briefing from March 13th, 2020, the day a national emergency was declared because of the coronavirus. He noticed a few interesting things: most tweets were neutral, positive and negative tweets had similar distributions, and when the number of strongly sentimental tweets increased, the number of neutral tweets dropped.

Overall, a very interesting talk, analyzing real-life concerns via machine learning!

Daniel Ritchie - “Design and Implementation of a Video Game to Teach Programming to Non-Technology Majors”

Daniel presented his work on teaching programming to non-CS students. He explained that in most non-CS courses, the focus is usually on data analysis. However, research shows that coding is still perceived as intimidating and challenging. Teaching programming via video games has already proven to be an effective approach, but it has usually been reserved for intro CS courses.

Daniel’s project therefore aims to develop a video game that specifically targets non-CS students and fulfills all of the following criteria:

  • Active learning: students have to code.
  • Flow: the learning objectives are not a side effect of the game; they are integrated into the storyline.
  • Learning objectives: the coding exercises follow an appropriate progression of difficulty.
  • Game design: the game is fun to play!
  • Accessibility: the game runs in the browser, and is free.

In the final part of the presentation, Daniel showed a demo of the game’s first level. It already looks great; I can’t wait to see the future levels!

Matthew Sotoudeh - “Ensuring the Safety of Deep Neural Network-Based Artificial Intelligence Applications”

Matthew presented his work on how to debug and repair machine learning (ML) systems, which has become a pressing concern given the increasing importance of ML in our society.

If software written by humans has a bug, fixing it is usually a matter of changing a few lines of code. With ML, it’s very different. Models are autonomous in the sense that they determine their internal rules on their own, by training on a very large amount of input data. The problem is that if they produce an incorrect result, we don’t know why, as it’s almost impossible to precisely figure out which internal rules failed. Matthew’s work addressed three aspects of this problem.

The first project he showed was about making ML systems explain their decision process. For example, if an ML system recognized a fireboat in a photo, it was also able to point out exactly which part of the photo contributed to recognizing the boat and which other part contributed to recognizing the “fire” attribute of the boat. His second project was about ML verification: by reducing a big ML system to a smaller version, it becomes easier to verify. His third and final project was about ML repair: if an ML system incorrectly recognizes a photo, his tool can identify the smallest change to the ML system that fixes the problem.

Matthew is an impressive student: he has already published 5 papers during his undergrad, some in the best conferences of the ML field! Congrats to him and good luck in grad school :)