Email Newsletter

A short update on my life is that I moved out to Berkeley, CA about a month ago. I spent some time getting adjusted, and since then I have been focused on exploring new ideas. Exploration can be lonely at times, and I thought blogging could be a good outlet. I don't know exactly what I'll write about, but it will likely relate to my startup journey. Please subscribe!

Last Hurrah

How It Started

During senior week last year, I was sufficiently bored waiting for my senior friends to graduate. One of those nights, I happened to run into two friends who had an app idea. It was a senior week hookup app inspired by last chance dances at other schools. They felt that the demand was clearly there and were looking for someone to help build it. I was intrigued, and since I didn't have anything much better to do, I said yes.

I didn’t exactly know what I was getting myself into. The app mechanics were simple: each senior could submit up to 10 people they’d like to hook up with, and if interest was mutual, we’d connect them over email. How hard could it be? I expected a night’s worth of engineering.

However, as we started working, I quickly realized that the real challenge was data. Users couldn’t be presented with 10 open text fields: names can be written differently, and there could be duplicates. How would we know which names corresponded to which users? We needed a precompiled list of all seniors with their names and emails. Users would select people from this list, and the system would associate names with users using emails as identifiers.
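With emails as identifiers, the matching step itself reduces to checking mutual selection. A minimal sketch (the data shapes and function names are hypothetical, not the actual code):

```python
def find_matches(lists):
    """Given {email: set of selected emails}, return the mutual pairs.

    Each match is a frozenset of two emails, so the pair (a, b)
    and the pair (b, a) count as the same match.
    """
    matches = set()
    for person, selections in lists.items():
        for choice in selections:
            # A match exists only when the interest is mutual.
            if person in lists.get(choice, set()):
                matches.add(frozenset((person, choice)))
    return matches
```

With emails as stable keys, duplicate or variant name spellings stop mattering, which is exactly why the precompiled list was necessary.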

So we set off to create that list. Unfortunately, it was not easy. We stitched together names and emails from QuakerNet, Penn Directory, and the class Facebook group. After many hours we had something we could work with. We also added a form to the site so that users could report missing names.

The actual website, intended for mobile use, was quite simple: a signup page and a form with ten searchable select fields. It looked like this:

I gasp now at how ugly it was, but it got the job done, so we quickly launched it. Our goal was to have matches go out by senior formal, which was only a few days away.

Marketing was a bit tricky since we wanted to stay anonymous. The thought was that anonymity would lend more credibility, adding allure and mystique to the app. No one wants to submit their crushes to an app their lousy friend built. So we stayed anonymous and indirectly asked friends to forward marketing emails to listservs. We struggled to build credibility until we hit Under The Button with this article:

Article on Under The Button.

Then we made it into the class-wide senior week email sent by the class board. It was glorious. We couldn’t have written a better one-liner ourselves.

Email sent by class board.

When all was said and done, we had a few hundred signups and a couple dozen matches. Success stories kept us excited for a few days. We had a fancy dinner to celebrate our hard work, then I erased the database, and it was over.

(Another) Last Hurrah

But... we did it again this year. Not an incredibly thought-through decision; we just figured people would like it again. I won’t bore you with all the implementation details this time. Pretty much the same deal. The main difference was that we completely revamped the front-end:

With more time, more credibility, and more marketing, we reached a lot more people this year.

Google analytics traffic overview.

Google Analytics reports that 4000 people visited the site. That figure is inflated because mobile and web visits were double counted as separate users, but even discounting for that, I'd say a large majority of our class at least visited the site. Among those who visited, 800 students signed up and over a hundred matches were made.

A Deep Dive

When I sat down to close up Last Hurrah this time, I couldn’t quite just pack up and move on. I had some questions. Why were there only ~100 matches when 800 people could choose up to 10 people each? What was the gender ratio? Were some people disproportionately popular? How many of us were recipients of secret desire from our peers?

I set off to answer some of those questions and dove into the data for quite a few hours. In the end, it only seemed fair to share my findings with you. I would like to emphasize that I did not look into any personal data. In other words, I didn’t look at anyone’s list or matches. The analysis was done only on aggregate data by a program that I wrote. It’s the same way Gmail can display email-specific ads without a human reading your email.

Okay, so let’s begin. We’ll start with rough user stats. We had 812 signups. These are all actual Penn seniors because we confirmed their school emails. Since our class is about 2500, roughly a third of the class signed up.

Among the 812, 591 users actually submitted their lists. This means that 27% of users signed up just to see what the site looked like. Feels reasonable. The following graph shows the distribution of how many people users put on their lists. You can see that the majority of users put down the full 10 people.

Submitted List Size Distribution

Moving on to gender distribution. Gender had to be guessed from first names, which I did with an external library. The library uses a giant dictionary of 40,000 first names (not just US names but also international ones) along with the probability that each name represents a given gender. So given a first name, it guesses male, female, or unclear. As a disclaimer, I acknowledge that the male/female divide is simplistic, but since gender was not self-reported, my options were limited.
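To illustrate the approach, here is a toy version (the real library's dictionary has ~40,000 entries; these names, probabilities, and the 0.8 threshold are made up for illustration):

```python
# Hypothetical miniature of a name-gender dictionary: name -> P(male).
NAME_PROBS = {"james": 0.99, "mary": 0.01, "taylor": 0.45}

def guess_gender(first_name, threshold=0.8):
    """Guess 'male', 'female', or 'unclear' from a first name."""
    p_male = NAME_PROBS.get(first_name.lower())
    if p_male is None:
        return "unclear"            # name not in the dictionary at all
    if p_male >= threshold:
        return "male"
    if p_male <= 1 - threshold:
        return "female"
    return "unclear"                # probability too close to 50/50
```

Names like "Taylor" fall into the unclear bucket, which is why the real output has three categories rather than two.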

The graph below shows the estimated gender distribution for the 591 users who participated.

Gender Distribution

The gender ratio is close to 1.5 to 1. Not bad.

Now let’s look into the actual selections people made. Again, this is all in aggregate and does not pertain to any individual's list. We will refer to a single name on someone’s list as a single selection, so each user could have made up to 10 selections. In total, 4481 selections were made by the 591 participating users. 1980 of the 4481 selections landed on one of the 591 users. This means that 56% of all selections were ineffective because the person on the other end did not participate in Last Hurrah. That feels like a lot of waste, but it's actually better than chance: only 591 of the roughly 2,400 seniors participated (about 25%), yet 44% of selections landed on a participant.

In the world of Last Hurrah, your popularity correlates to the number of times others put you down on their list. The graph below is a distribution of that popularity. For the technical folk, you can think of it as an in-degree distribution of a directed graph.
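Computing that distribution is a two-step count: tally how many lists each person appears on, then tally how many people share each count. A sketch (the data shapes are hypothetical, not the actual analysis code):

```python
from collections import Counter

def popularity_distribution(lists):
    """lists: {user: iterable of selected names}.

    Returns {times_selected: number_of_people}, i.e. the in-degree
    distribution of the selection graph.
    """
    in_degree = Counter()
    for selections in lists.values():
        for name in set(selections):   # ignore duplicates within one list
            in_degree[name] += 1
    return Counter(in_degree.values())
```

Summing `count * people` over the result recovers the total number of selections, and summing just the people recovers how many individuals were selected at least once.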

Popularity Distribution

More than half of the population lies within the 1 or 2 selections bucket. The true outlier is the one individual who received 31 selections (I don’t know who it is, so don’t ask). The bars of the graph above sum to 1276, which means that 1276 people had at least one person select them. The maximum possible sum is 2433, since that’s the number of graduating seniors on our precompiled list.

1276 out of 2433 feels harsh at first glance because that means 1157 people (48%) did not receive a single selection. Does this mean that 48% of our class is desired by no one? Not at all. Keep in mind that the selections were made by only 591 people, and people choose those who are in their social circles. If anything, 1276 represents the reach of the 591.

If we look at how many among the 591 received at least one selection, it’s a whopping 577. That's 97.6%. Almost everyone. I am honestly surprised by how high that number is. As vulgar as the intentions of Last Hurrah may be, this statistic is incredibly endearing. Everyone is desired by someone.

However, due to the tragedy of misaligned desires, only 111 matches were made, involving 165 people. This means that only 28% of the people who submitted lists received a match. Below is the distribution of the number of matches people received.

Match Received Distribution

Again we have an outlier who received 5 matches. The vast majority received 1.

I want to end this post with a note I received at 2:30am on the last day of senior week. It came through an anonymous support system we had set up for users to reach us.

Thank you note from anonymous user.

Those two lines seriously made my week. On that note, I'm ready to leave Last Hurrah behind. Thank you to everyone who helped me with this project, especially the two friends who started it all.


It’s actually been more than two months since I stopped working on Otter, but I felt the need to record and digest the experience before I completely moved on. This piece is my attempt at doing that.

The Idea

Three weeks before Valentine’s Day, Hunter and I were suddenly struck by the idea that there should exist a dating app where friends suggest dates for each other. There seemed to be many merits to this idea: second degree networks are ideal pools to find love, friends understand us better than algorithms, and friend introductions set better contexts than Tinder swipes. 

After a few days I was totally convinced that the idea was worthwhile. Every dating app out there was a variation of two people judging each other’s profile. This idea offered something meaningfully different. It wasn’t the best time to start a new project, but at the same time I was never going to have an easy shot at launching a dating app if not while in college. So Hunter and I had a long Skype call and decided to put a month of our project time on this new idea.

We named the product Otter. It was a quick decision by Hunter. We didn’t want a name that screamed dating or sex. Otter was a short name associated with a cute animal, and it had great potential for puns. Our tag line became “help friends find their significant otter,” and our domain name followed suit.

The Sprint

Since we budgeted a month, we planned to spend two weeks building the product, a week marketing the launch around Valentine’s day, and a week operating and monitoring progress before we reevaluated.

For the following two weeks I stopped going to classes, Hunter cut down on sleep, and we locked in to build the service. Every night we would discuss features, Hunter would design pages and write copy, and I would spend the next day bringing everything we discussed to life. Given our short timeframe, building a mobile app was out of the question. What we settled on was a website that would work well on both mobile and desktop browsers.

We used Firebase as our data store, Facebook for user authentication, and React with Redux for the entire front-end, including routing. The compiled static HTML/JS/CSS files were hosted on Firebase. In addition, an entirely decoupled Node server ran workers that would recompute friend graphs on new user signups, send match texts, and relay text messages from one user to another. I really liked this architecture. It allowed me to write less backend code and focus on the front-end, with attention to server-side details only where they mattered.

Cover art for the signup page was created by Jun Xia. The logo below was created by Reika Yoshino.

The Launch

A few days before Valentine’s Day, Otter launched with an article in The Daily Pennsylvanian. Over 500 students signed up the first day, which greatly exceeded our expectations. We were both overwhelmed and grateful. My FB status that day really says it all:

FB Status post on day of launch.

For the next week or so, signups and activity continued and we reached ~1000 users and ~500 matches. Success stories from friends kept me excited, and I tried everything to keep the hype going, including postering around campus and pitching at sorority chapter meetings.

Otter poster.

However, when Hunter and I sat down to reevaluate a week later, the verdict was clear: despite marketing efforts, signups and matches were decreasing every day. What we learned was that users ran out of things to do. In retrospect this is quite obvious. Even if the service itself has 1000 Penn students signed up, an average new user will only be FB friends with about 50 of them. After making a few matches the user would run out of possibilities, and since the user base doesn’t double every night to change that, they would gradually tune out.

Take Two

Going into Spring Break, we decided to invest another month and build a new version of Otter. We felt there was potential in the concept, judging from the initial outpouring of interest, but we needed to enable user actions that could sustain that interest.

We decided Otter v2 should be more like a game where pairs were automatically presented. We also relaxed the requirement that both people be friends you know: one person could be up to a friend of a friend. Since you could be deciding on someone you didn’t know, we asked users to build profiles. When all was said and done, we essentially had Tinder where you swiped for your friends rather than yourself.

Building the new version took us about another two weeks (essentially the entire Spring Break and a bit more). This time around, a lot of my time was spent writing the backend that generated the pairs presented to users. When all was ready, we sent an email to our existing ~1000 users announcing the new version.


Ultimately, version two didn’t work out either. It had an initial burst of activity for about a week, then it died out. It seemed that this new game-like version of Otter needed to be a mobile app. As it stood, it was as if Tinder were a desktop-only web app, something you couldn’t use lying in bed.

Hunter and I talked again, and we decided to stop working on Otter. We weren’t about to build a mobile app, and we were anxious about an ongoing project we had kept on hold. A part of me wonders if we could have made the original Otter work with just more marketing, but ultimately, that’s neither our core strength nor our preferred way of competing.

So that’s it. A short journey that spanned about two months. I'm not exactly sure what to make of it yet, but I did discover the joy of rapid prototyping and that I want to get better at it. Also, a few of my friends did find real love through Otter. That’s something. 

FreeForCoffee Summer


Last semester, Hunter and I were pretty engaged in a side project called FreeForCoffee. It was a tool that helped college groups organize one-on-one coffee chats among their members. At the end of the semester, about 25 groups (around 800 users) were using it with a 20-30% weekly participation rate. 

When time came for me to finalize summer plans, I decided that I wanted to work on FreeForCoffee. I had a few reasons. First, usage stats of outlier groups were really intriguing. These groups had almost half of their members saying “yes” to meet someone new each week. Power users among those groups were participating literally every week. This felt rare. We had built an online tool that could persuade people to actually meet offline, consistently. Second, I had fiddled with building hobby projects long enough that the idea of developing an actual consumer-ready product felt like a welcome challenge. Lastly, I liked the byproduct of the service: people meeting people. I have long believed in the value of face-to-face one-on-one conversations and the idea of facilitating them felt worthwhile. 

I talked to Hunter about this decision, and fortunately he was also interested in exploring the project further. In particular, he wanted to figure out whether companies would find FreeForCoffee useful. His hunch was that employees would be interested in meeting their coworkers. Why? Perhaps because they feel disconnected, are curious about what others work on, or even simply want to make friends. This seemed worth figuring out because, well, there are a ton of companies, and companies pay for things they find useful (during school we had covered costs out of our own pockets). Thus we agreed to explore the company use case as our primary focus for the summer. Shortly afterwards, we applied for and received a $3K grant from the Wharton Innovation Fund, signed a 2BR sublet in Berkeley, and buckled down.

Getting started (Week 1-2)

The first two weeks were spent rewriting the old code. Our old codebase did not have the notion of a group. So whenever we needed to bring FreeForCoffee to a new group, we cloned the codebase, changed a configuration file that held things like the group name and signup questions, and deployed it as a separate website with a separate database. This was handy in the beginning but obviously wasn’t sustainable. The rewrite involved adding the concepts of groups and membership, plus a setup-wizard-like flow that let us create new groups with ease.

During this period, I was working alone in Berkeley (Hunter was volunteering in the Philippines). It was a weird feeling, initiating work without a manager or any real deadline. I'd say it was a mixed bag of fear and excitement. During the past semester I had longed for a period of focus, so in a way it was liberating.

Collecting feedback (Week 3-5)

As soon as Hunter arrived, he started scheduling a bunch of meetings with friends, mentors, and other people he knew: basically, anyone he thought would have valuable feedback on the product or might be able to use FreeForCoffee at their own company. He had a lot of meetings, up to 4-5 a day, all over the Bay Area. During this time, I kept developing the product.

Within these three weeks we launched our first batch of groups: two Penn groups that matched undergrads and alumni in the same city for the summer, and Fragomen (an international law firm where Hunter’s dad works). Because the Penn groups ran in different cities, including NYC and SF, our service had to become timezone-aware (i.e. text messages needed to be delivered at the appropriate local time for each user). Fragomen ran in four cities (two of them international) and used FreeForCoffee to introduce employees in different cities over video calls. For this we needed to support text messaging to international numbers, as well as video calls.
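Timezone-aware delivery boils down to comparing the current UTC time against each user's local send hour. A minimal sketch in Python (our service isn't written in Python; the 9am send hour, function name, and polling design are assumptions for illustration):

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

def due_now(user_tz, send_at=time(9, 0), now_utc=None):
    """Return True if it is currently the local send hour for this user.

    A worker can poll this every few minutes and text whoever is due,
    so NYC users and SF users each get their text at 9am local time.
    """
    now_utc = now_utc or datetime.now(ZoneInfo("UTC"))
    local = now_utc.astimezone(ZoneInfo(user_tz))
    return local.hour == send_at.hour
```

The key point is storing a timezone per user and converting one shared UTC clock, rather than running separate clocks per city.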

We also added some new core features during this time, such as relaying text messages ourselves rather than relying on GroupMe to put people into group chats, sending a confirmation text at the end of the week asking how the chat went, and a dashboard that showed cumulative stats for group admins. The dashboard for a small group would look something like this:

The dashboard design, by the way, is all Hunter's. As a one-man development team, I quickly established myself as the bottleneck of our operation, which meant Hunter picked up everything besides writing code: designing, writing copy, and all outbound communication. Reika also helped a ton with design, most notably creating the logo.

Following leads (Week 6-8)

These three weeks were probably the low point of our summer. We had enough leads in our pipeline, originating from Hunter's many conversations, but nothing actually crystallized. If we've learned one thing about enterprise software sales, it’s that it takes time. Emails and introductions are forwarded up and down the org chart, and each step can be delayed 1-2 weeks even if all parties are happy with the product.

Personally, this time was an opportunity to do some engineering housekeeping. Having added code to the project for 5 weeks, I had accumulated a surprising amount of complexity for such a simple tool. The problem with complexity is that it slows down future development. I started writing automated tests to guard against regressions. I also took a day or two to study CSS, and worked on some unattended basics: creating a home page, adding SSL and a CDN, and adding frontend validations to our signup form.

Screenshot of home page.

One piece of feedback we consistently heard from companies was that it would be nice if we integrated with their calendars to help with scheduling, so we added a Google Calendar integration. We also added a weekly spotlight email.

Breakthrough (Week 9-12)

Four groups launched during these weeks: AltSchool (a 150-person ed tech startup), Flatiron (a 200-person healthcare tech company), Facebook’s marketing team, and Wharton SF (an MBA/Executive MBA program). The jury is still out for some, but the pilots (most of these launches were 30-day free pilots) went very well, with average weekly participation rates around 40%.

We launched many new features as well: a “who you already know” feature where you select people you shouldn’t be introduced to, an email version of the service, profile images and a “my account” page, an executive assistant feature, a timeline where people can upload selfies of their coffee chats, and a roster where you can view all members in the group. 

We also incorporated as an LLC last week. We still don’t really call ourselves a startup (it’s more like an intense two-man side project), but we needed to incorporate to assign our intellectual property to a legal entity before Hunter starts work at Facebook. It also matters to customers that they’re dealing with a corporate entity and not just a person.

Looking ahead

It feels like we answered a series of questions this summer: would companies listen to us, would they be interested, could we convince them to launch, would employees sign up, would employees use the product? Fortunately, the answer to all of these seems to be yes. We are mainly wondering two things now. First, whether we can keep up user participation in the long term (6 months, 1 year, 2 years). Second, if and how much companies would be willing to pay us. Wharton SF is going to be our first paying customer, and we are in the midst of, or approaching, pricing conversations with other companies. We will have to wait and see how it goes.

So that's it. That was our summer. If you have any thoughts about FreeForCoffee, feel free to reach out to us. We’d love to hear from you.


Today I wanted to write about a side project called FreeForCoffee that has consumed most of my Saturdays this semester. I have been meaning to write about it for a while, but the project kept morphing; there was always more to work on. Now that there are no more groups to add in the pipeline and the features are somewhat mature, it feels like the right moment to look back.

About a year ago, Hunter Horsley and I were grabbing coffee at Hubbub. We each had a project idea that we wanted to try out, so we decided to collaborate on each other's ideas. His idea was video journaling. Mine was pooling people's schedules together for lunch matches.


We named the tool FreeAtNoon. The site itself was simply a signup page. A user would input his/her name, email, phone number, and weekly lunch schedule (into a when2meet-like form). Communication from then on would happen over text. We made a short video to explain the tool and asked our friends to sign up.

Every morning of a day you had marked a free time slot (e.g. Tue 1-2), you would receive a text asking if you were down for lunch with someone at that time. If you responded "yes", you were put into a GroupMe with your match. Since both people had already confirmed availability for that exact time, you only needed to decide where to grab lunch.

We tried this idea twice: first during finals last spring when we first built it, then again at the beginning of last semester. About fifty of our friends signed up, but it never worked well. Everyone just kept responding "no". At that point we were just spamming our friends, so we shut it down.

We had two guesses as to why people said "no":

  1. The expected value of meeting someone totally random seems pretty low (even if it is within Penn).
  2. It kind of feels awkward to eat with someone you don't know (as opposed to getting coffee).


We were about to put the idea aside as a failed social experiment. However, Taylor Culliver, the executive editor of The Daily Pennsylvanian at the time, had the idea of piloting FreeAtNoon just within the DP. He thought it would be a good tool for people at the DP to get to know each other better, especially those in different departments.

We liked the idea a lot and drastically simplified FreeAtNoon to fit this purpose. Lunch any time of the week became coffee on weekends, because optimizing lunch schedules was no longer the point. It worked like this:

  1. Thursday afternoon: You get a text asking if you want to meet someone within the DP for coffee over the weekend. You reply "yes" or "no".
  2. Thursday night: You are put into a GroupMe with your match. The two of you decide when and where to get coffee.

We launched FreeForCoffee within the DP in mid-October last year. About fifty people signed up, and it got much better engagement.

Around this time, Joon Choi, a member of Koreans At Penn (KAP), contacted me with the idea of running FreeAtNoon within KAP. He did not know about the DP case but had coincidentally had the same idea. So we created a FreeForCoffee for KAP.

After break, when this semester came around, both the DP and KAP wanted to continue using FreeForCoffee. At this point Hunter and I were curious whether more groups would be willing to use the tool. If there were, we wanted to put in more time and automate some of the manual tasks related to matching. Hunter created these slides and sent them out to a bunch of group presidents and social chairs. A surprising number of groups were excited about the idea. Since then, almost every week we've been adding new groups and developing the tool.

Now we have about 25 groups using the service, with 500+ people signed up and getting texts from FreeForCoffee each week. Numbers fluctuate with the tides of the semester, but around 150 people reply YES and get coffee in a given week. The groups range widely from small senior societies and interest groups to larger fraternities/sororities and cultural groups.

Personally, my biggest concern was whether people would keep using it after the initial novelty died out. KAP, one of our biggest and oldest groups, with around 80 users and having run for more than two months, still gets more than a third of its members saying YES each week. That fact helps with my worries and keeps me excited. Some people have participated more than eight times, nearly every week. They must be getting some value out of it, right? I am uncertain about many things these days, but people meeting people is probably a good thing.

Anatomy of FreeForCoffee

For those of you who are interested in how the tool was built, I thought I'd include a short paragraph. The website is built with Ruby on Rails and hosted on Heroku. We use Twilio for SMS interactions: the text you send to your group's number is forwarded to our server by Twilio, we parse the message, and we send a response text back to you via Twilio. We use GroupMe's API to programmatically create group chats when admins approve matches on our website. That's it. It is a pretty simple tool.
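The reply-parsing step can be sketched like this (in Python for illustration, since the real site is Rails; the function name and the exact set of accepted keywords are assumptions):

```python
def handle_inbound_sms(body):
    """Decide what to do with a user's reply to the weekly text.

    Twilio forwards the inbound message body to our webhook; we
    normalize it and map it to an action for the matching step.
    """
    normalized = body.strip().lower().rstrip("!.")
    if normalized in ("yes", "y", "yea", "yeah"):
        return "opt_in"      # queue user for this weekend's matching
    if normalized in ("no", "n", "nope"):
        return "opt_out"
    return "unrecognized"    # reply with a clarification text
```

Normalizing before matching keeps "YES!", "Yes.", and "yes" from being treated as three different answers.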

What's Next

We don't really know. It was never a business, just a blown-up social tool. We do need to figure out how to cover the costs (mostly sending and receiving text messages), which is why we started asking new groups for money. We will probably add more groups that are interested (if you want FreeForCoffee for your group, email us). Maybe we could do something for NSO. We are open to ideas, so if you have any, let us know. That is all I wanted to write: to share our experience and also keep a record for myself. Cheers!


I didn't have time to write about it then, but CourseGrapher was pretty much my Saturday project last week. It is a tool that lets you visually browse courses at Penn, graphing them along fields like course quality and difficulty. A screenshot is worth a hundred words.

The tool was originally created by my friends Greg and Alex. It gets data from the PennCourseReview API, displays the data using the Google Chart API, and uses Firebase as a caching backend. I tried to use it over break to find some courses worth taking outside my major this semester and discovered two problems that made the website unusable for me:

  1. The dots only listed course numbers, not course titles. So the "MODERN EGYPT" part of the label in the picture above was nonexistent, which meant I had to search separately for every dot of interest. It was too tedious.
  2. The graph displayed every course ever offered on record (about 10 years of data), which meant that most courses were either not offered this semester or no longer offered at all. For someone just looking for a course to add this semester, there was too much noise.

I set off to improve those two things. I made sure that course title information was cached (in Firebase) and displayed, and that you could filter for courses offered this semester, which is the "Display offered courses only" checkbox in the top right. The course filtering was the bulk of the work. My code contribution for that is here. It's not complicated at all, but navigating the nitty-gritty details of the unintuitive Penn Registrar API was painful.
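Conceptually, the filtering just intersects the cached review data with the registrar's list of course ids for the current term, and the labeling attaches the title to each dot. A sketch (the field names and function are illustrative, not the actual API schema or my code):

```python
def filter_and_label(courses, current_ids):
    """Keep only currently offered courses and build the dot labels
    shown on the graph (course id plus uppercased title).

    courses: list of dicts from the cached review data.
    current_ids: set of course ids the registrar lists for this term.
    """
    kept = []
    for c in courses:
        if c["id"] not in current_ids:
            continue  # not offered this semester; drop from the graph
        kept.append({**c, "label": f"{c['id']} {c['title'].upper()}"})
    return kept
```

Doing the intersection once per page load keeps the checkbox toggle instant, since both datasets are already cached client-side.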

A filtered view of all courses offered in the history department looks like this (I was really happy when I got this to work):

I shared it with my Facebook friends and some other groups, and it got some traction and usage. Glad I did it. Before I end, I must give Greg credit for helping me understand the existing codebase. His help was instrumental.


Santo Domingo

I spent the entire winter break in Santo Domingo. During that time I didn't feel the need for Saturday projects, because every day was essentially a Saturday. Regardless, I wanted to write a short post to sum up the various things that happened in this gap of time.

I went to Santo Domingo because I wanted to get away from campus but avoid the comfort and routine of home, in order to engage with myself. It offered the cheapest flights and living costs I could afford. We (I persuaded a friend to join me) found housing via Airbnb. It had a good supermarket nearby, and we piggybacked on Wi-Fi from an office downstairs. It was all we needed.

For the first week we did do some touristy stuff. Some photos.

Then I alternated between mind-numbing consumption of entertainment, serious introspection, and a lot in between. A list of things I did, in no particular order:

  • Watched 'Breaking Bad': I'd heard such good things about this TV series, but I ended up not liking it too much. Stopped after season 2.
  • Read books.
  • Started a book log.
  • Read comics: '진격의 거인' (Attack on Titan) is good.
  • Took a Udacity course on 'Programming a Robotic Car': Absolutely amazing. Sebastian Thrun is the perfect person to teach this topic. It was the first time I actually finished a Udacity course.
  • Read a six-volume Korean fantasy/martial arts fiction non-stop for a whole day: Non-stop reading for pure entertainment should be a thing. The level of delusion you reach towards the end is quite unique.
  • Analyzed my spending for the last semester: I didn't know this, but Bank of America lets you download a long period of spending history as plain CSV. I analyzed my finances for the first time to figure out where the hell I spent all my summer earnings.
  • Figured out a diet to achieve a relatively healthy existence with minimal cooking: Boiled sweet potatoes, plain potatoes, and eggs; apples; tomatoes; oatmeal with granola; PB&J sandwich; canned tuna, crackers, and cheese; milk; greek yogurt; bananas. Everything is boiled in bulk or prepared on the spot. Survived on this combination for two weeks.
  • A lot of Skype calls and emails catching up with old friends. 
  • Most importantly, I tried to figure out what I want to do with my life.
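The spending analysis above can be sketched in a few lines of Python. This is a minimal sketch, not the analysis I actually ran; the column names 'Date' and 'Amount' are assumptions, not necessarily Bank of America's actual CSV headers.

```python
import csv
from collections import defaultdict

def spending_by_month(csv_path):
    """Sum spending (negative amounts) per month from a bank-export CSV.

    Assumes columns named 'Date' (MM/DD/YYYY) and 'Amount' (negative for
    debits) -- adjust these to your bank's actual export format.
    """
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            amount = float(row["Amount"])
            if amount < 0:  # count debits only; skip deposits
                month, _, year = row["Date"].split("/")
                totals[f"{month}/{year}"] += -amount
    return dict(totals)
```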



I have not posted to this blog for a whole month. So what have I been doing all these Saturdays? Robockey. It has been quite a journey, and I'd like to take this Saturday to record and share my experience.


Robockey is an autonomous robot hockey tournament held at UPenn each fall, serving as the final project of Mechatronics (MEAM 510). This year was the 6th annual tournament, with 24 teams contending. Below is this year's teaser.

The Game

The game is quite simple. You are given a rink, a puck, and 'stars' (IR lights above the rink). When given a 'PLAY' command over wireless, your robot needs to find the puck by following its IR light and drive it to the opponent's goal, localizing itself by looking up at the stars with a mounted camera. The physical requirements are that your robot fit into a 15cm diameter cylinder and be no taller than 13cm.

Anatomy of the Robot

Before the robot can run any code and attempt to be smart, it needs a body and a nervous system. We will discuss those first.

The Structure

The basic structure is laser cut acrylic connected by standoffs. 

IR Sensors

We put the sensors (phototransistors) at the four corners of our robot, each corner having 3 sensors shorted together for wider coverage. One sensor was added at the mouth of the puck holder and another on the top plate facing down, to determine whether we are holding the puck. We 3D printed a bottom piece that holds the IR sensors and cords while press-fitting nicely with the bottom plate.

Motors and Wheels

We used Pololu motors and BaneBot wheels. 

Batteries and Holder

We used two lithium ion batteries to power the motors, and a 9V battery to power the microcontroller and IR sensors. A battery holder was 3D printed.


From the left: regulator circuit that provides regulated 5V for all other necessary circuits; IR sensor circuit; motor driver circuit; microcontroller circuit.


Everyone used the same microcontroller (M2) that was custom designed for the class. It's quite small. 

Putting It All Together

The Brain of the Robot

Within a team of four, my role was programming. Writing software for a microcontroller was quite new for me: dealing with timers, analog-to-digital conversion, hardware interrupts, packet-level wireless communication, pulse modulation for variable voltage output, etc. All such functionality was a struggle to get working in the beginning, but ultimately my job came down to writing control logic: processing the IR sensor and camera inputs to produce two output voltages, one for each motor. The solution was to build two PD controllers: one for puck finding and the other for goal finding.

For example, when finding the puck, you can first estimate the theta location of the puck relative to your robot by processing the IR sensor input. Then a simple proportional controller would create a rotational motor voltage proportional to theta. This would make the robot oscillate in the direction of the puck but never quite settle on it. If you add a derivative term, effectively dampening the oscillation, you can make the robot rotate to the direction of the puck and stop right there. Now, adding a forward speed term to the output motor voltage that is greater when theta is smaller, you have a robot that follows the puck quite beautifully. Below is a video of me tuning puck finding; the derivative constant needs to be higher (you will still notice oscillation), but you will get the picture. The code I wrote for puck finding can be found here.
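Estimating theta from the four corner sensor clusters can be sketched as an intensity-weighted vector sum. This is an illustrative Python sketch under my own assumptions (mounting angles, weighting scheme), not the method from our actual C firmware.

```python
import math

# Assumed mounting angles of the four corner sensor clusters (radians),
# measured counterclockwise from the robot's forward direction.
SENSOR_ANGLES = [math.pi / 4, 3 * math.pi / 4, -3 * math.pi / 4, -math.pi / 4]

def puck_theta(readings):
    """Estimate the puck's bearing as an intensity-weighted vector sum.

    readings: IR intensity from each corner cluster, in SENSOR_ANGLES order.
    Returns an angle in radians; 0 means the puck is dead ahead.
    """
    x = sum(r * math.cos(a) for r, a in zip(readings, SENSOR_ANGLES))
    y = sum(r * math.sin(a) for r, a in zip(readings, SENSOR_ANGLES))
    return math.atan2(y, x)
```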

Driving to the goal is very similar. You just use the angle difference between the robot's orientation and the direction of the goal as your theta. Of course there are more nitty-gritty details, such as smoothing the curve when carrying the puck, or boosting rotational force when stuck on a wall, but in essence it is the same PD controller. After the PD controllers were built, a lot of the work was manual testing to find the right proportional and derivative constants to use in the controllers. These constants needed to be figured out for each robot, since each had different motors and weight.
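The control loop described above can be sketched as follows. This is an illustrative Python sketch, not the actual robot code (which ran in C on the M2); the gains and sign conventions here are made up.

```python
def pd_motor_voltages(theta, prev_theta, dt, kp=2.0, kd=0.5, v_max=5.0):
    """PD controller: map the puck's bearing error to left/right motor voltages.

    theta: puck angle relative to robot heading (0 = dead ahead, positive = right).
    kp, kd: illustrative gains; the real constants were hand-tuned per robot,
    since each robot had different motors and weight.
    """
    # Rotational term: proportional to the error, damped by its derivative.
    turn = kp * theta + kd * (theta - prev_theta) / dt
    # Forward term: drive faster the more squarely we face the puck.
    forward = v_max * max(0.0, 1.0 - abs(theta))
    # Positive theta -> puck to the right -> left motor faster, turning right.
    left = forward + turn
    right = forward - turn
    clamp = lambda v: max(-v_max, min(v_max, v))  # respect motor voltage limits
    return clamp(left), clamp(right)
```

The same function serves goal finding by swapping in the angle to the goal as theta.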

The Tournament

24 teams competed in the tournament last Thursday. It was double elimination. We won the first two matches, lost, then won, then lost again. We were eliminated right before the semi-finals. A few goals from the tournament. 

We put quite an amount of time into these robots, and the fact that we still did not have a chance at winning says a lot about the caliber of the tournament. I have never seen so many students voluntarily put in endless all-nighters. It was tough competing in a graduate class as an undergrad with less time and experience, but I'm very glad that I did. I could not have imagined learning more about robotics in a month.

The Team

This post would not be complete without mentioning the awesome team that I worked with. Max, who single-handedly modeled and built most of our robots; Ryoo, who designed and soldered all the circuits; Casey, the only person that worked on all mechanical, electrical, and software, pushing everyone forward; and finally, our coach Steve who guided us through critical moments. Thanks all, it was great.

Project VORB


This week I happened to read the article "Yelp And Michelin Have The Same Taste In New York Restaurants". In it, Nate Silver (the author) shows a strong correlation between the recent Michelin stars for NY restaurants and his own scores based on Yelp data for the same restaurants. The fact that Yelp data, when analyzed correctly, could generate a ranking on par with one curated by hundreds of tasting professionals was intriguing. If so, then we could do the same analysis and create a list for Philly (where Michelin sadly doesn't operate), right? Greg and I decided to do just that.

 Working out of our 3rd floor living room. Photo cred: Abhishek.



VORB is the score Nate Silver used in his analysis. It stands for Value Over Replacement Burrito. Its original inspiration is VORP (Value Over Replacement Player), a score used in baseball to estimate the added value of a player compared to a baseline replacement player. The 'B' in VORB stands for 'Burrito' because Nate Silver first developed the score to figure out the best burrito in America (the formula for VORB is in the footnotes). After staring at the VORB formula for a while, we decided to considerably simplify it into our own VORB score as follows:

VORB = (average rating - 3.3) * ln(total number of reviews) * (number of reviews since 2012 / MIN(number of days open, number of days since 2012))

Conceptually, this VORB score can be understood as (average opinion) * (confidence) * (recent popularity).
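The simplified formula translates directly into a function. A sketch (the parameter names are mine, not from our actual script):

```python
import math

def vorb(avg_rating, total_reviews, recent_reviews, days_open, days_since_2012):
    """Simplified VORB = (average opinion) * (confidence) * (recent popularity)."""
    opinion = avg_rating - 3.3            # value over a "replacement" restaurant
    confidence = math.log(total_reviews)  # more reviews -> trust the average more
    popularity = recent_reviews / min(days_open, days_since_2012)
    return opinion * confidence * popularity
```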

Data Mining

Getting the necessary data from Yelp was the bulk of the work. For Nate Silver, I imagine it was easy. It is likely that Yelp just gave it to him because he is a popular data journalist. For us, we had to earn it bit by bit. 

The first task was to get a long list of restaurants we wanted to consider. This was done using the Yelp API, asking it for a list of restaurants within a bounding box location. The box that we gave was something like this:

Most areas of University City and Center City were included.

Yelp's API only gives 20 restaurants at a time, but we kept asking (with an increasing offset) until we had 500. We thought this was an adequate sample size because our goal was to find something like the best 30 restaurants in Philly, not 1,000. Also, because Yelp returns restaurants using its own ranking logic, taking some faith in Yelp, we assumed the best restaurants would be within the 500. The code we used is here.
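The pagination loop can be sketched as follows. Here `search_page` is a stand-in for the actual Yelp API request; the names and shapes are illustrative, not Yelp's real client API.

```python
def collect_restaurants(search_page, target=500, page_size=20):
    """Page through a search endpoint until we have `target` unique businesses.

    `search_page(offset)` should return a list of up to `page_size` business
    dicts with an 'id' key -- a placeholder for the real Yelp API call.
    """
    seen, results, offset = set(), [], 0
    while len(results) < target:
        page = search_page(offset)
        if not page:
            break  # no more results available
        for biz in page:
            if biz["id"] not in seen:  # the API can repeat businesses across pages
                seen.add(biz["id"])
                results.append(biz)
        offset += page_size
    return results[:target]
```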

After we had the list, we saved the HTML pages that held the reviews for all of these restaurants from Yelp's website. The process is tedious but doable. Each Yelp page displays 40 reviews, so we kept requesting the next page until we had all the reviews for a given restaurant. The average number of reviews for our sample of 500 restaurants was around 200, which means we scraped about 100,000 reviews by making about 100,000/40 = 2,500 requests. For a computer this is a reasonable task. The only issue was that Yelp kept blocking us after a few dozen requests. In the end, we just kept re-running the program until we had all 100,000 reviews.
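The keep-running-it-until-done approach amounts to a retry loop like this sketch (`fetch` is a placeholder for the real request code, which isn't shown in this post):

```python
import time

def fetch_all_pages(fetch, n_pages, delay=0.0, max_rounds=50):
    """Fetch pages 0..n_pages-1, retrying failures round after round.

    `fetch(i)` returns the HTML of page i or raises when blocked --
    a stand-in for the actual scraping request.
    """
    pages = {}
    for _ in range(max_rounds):
        missing = [i for i in range(n_pages) if i not in pages]
        if not missing:
            break  # everything fetched
        for i in missing:
            try:
                pages[i] = fetch(i)
            except Exception:
                time.sleep(delay)  # blocked: wait a bit, then keep going
    return pages
```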

It is worth noting why getting ALL the reviews was necessary: Yelp only displays average ratings in 0.5 increments (I hadn't realized this before either, but check your Yelp app). For our purposes we needed to distinguish between a rating of 4.3 and 4.5, so we had to get all the reviews and compute the average ourselves.
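The point is easy to illustrate: two restaurants that Yelp displays identically can have meaningfully different true averages. A toy sketch:

```python
def exact_average(stars):
    """Exact mean of individual review star ratings."""
    return sum(stars) / len(stars)

def displayed_rating(stars):
    """What Yelp shows: the average rounded to the nearest 0.5 star."""
    return round(exact_average(stars) * 2) / 2
```

A restaurant averaging 4.7 and one averaging 4.5 both display as 4.5 stars, even though VORB would rank them quite differently.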

Once we had all the HTML pages that held all the data we needed, it was only a matter of parsing and computing VORB scores.


The top 20 restaurants are shown below. A list of the top 100 can be found here. 'Recent reviews' indicates the number of reviews since 2012. To remind you again,

VORB = (average rating - 3.3) * ln(total number of reviews) * (number of reviews since 2012 / MIN(number of days open, number of days since 2012))

=  (average opinion) * (confidence) * (recent popularity)

1. Zahav | avg 4.46 | reviews 1010 / 694 | opened 2009-05-18 | VORB: 1.16 * 6.92 * 0.68 = 5.46
2. Barbuzzo | avg 4.29 | reviews 973 / 703 | opened 2010-12-04 | VORB: 0.99 * 6.88 * 0.69 = 4.70
3. Amada | avg 4.25 | reviews 1255 / 659 | opened 2006-04-08 | VORB: 0.95 * 7.13 * 0.64 = 4.37
4. Monk's Café | avg 3.99 | reviews 1381 / 826 | opened 2007-07-15 | VORB: 0.69 * 7.23 * 0.81 = 4.05
5. Vedge | avg 4.44 | reviews 557 / 538 | opened 2011-11-21 | VORB: 1.14 * 6.32 * 0.53 = 3.80
6. Luke's Lobster | avg 4.33 | reviews 319 / 319 | opened 2013-06-05 | VORB: 1.03 * 5.77 * 0.64 = 3.78
7. El Vez | avg 3.82 | reviews 1510 / 1004 | opened 2005-08-22 | VORB: 0.52 * 7.32 * 0.98 = 3.74
8. Tommy DiNic's | avg 4.16 | reviews 1003 / 635 | opened 2007-02-20 | VORB: 0.86 * 6.91 * 0.62 = 3.71
9. Bleu Sushi BYOB | avg 4.31 | reviews 90 / 90 | opened 2014-06-22 | VORB: 1.01 * 4.50 * 0.75 = 3.41
10. Hai Street Kitchen & Co | avg 4.02 | reviews 129 / 129 | opened 2014-06-10 | VORB: 0.72 * 4.86 * 0.98 = 3.40
11. HipCityVeg | avg 4.21 | reviews 516 / 516 | opened 2012-05-15 | VORB: 0.91 * 6.25 * 0.58 = 3.29
12. Nan Zhou Hand Drawn Noodle House | avg 4.18 | reviews 838 / 559 | opened 2009-02-24 | VORB: 0.88 * 6.73 * 0.55 = 3.25
13. Morimoto | avg 4.28 | reviews 1058 / 466 | opened 2005-05-25 | VORB: 0.98 * 6.96 * 0.46 = 3.11
14. Vernick Food & Drink | avg 4.55 | reviews 317 / 317 | opened 2012-09-17 | VORB: 1.25 * 5.76 * 0.42 = 3.00
15. Talula's Garden | avg 4.29 | reviews 537 / 481 | opened 2011-07-05 | VORB: 0.99 * 6.29 * 0.47 = 2.93
16. Charlie Was a Sinner | avg 4.18 | reviews 110 / 110 | opened 2014-05-16 | VORB: 0.88 * 4.70 * 0.70 = 2.91
17. Dizengoff | avg 4.27 | reviews 52 / 52 | opened 2014-08-09 | VORB: 0.97 * 3.95 * 0.72 = 2.77
18. Abe Fisher | avg 4.47 | reviews 32 / 32 | opened 2014-09-01 | VORB: 1.17 * 3.47 * 0.65 = 2.65
19. Han Dynasty | avg 4.04 | reviews 804 / 542 | opened 2010-08-05 | VORB: 0.74 * 6.69 * 0.53 = 2.62
20. Parc Restaurant, Bistro & Cafe | avg 3.93 | reviews 1021 / 613 | opened 2009-01-20 | VORB: 0.63 * 6.93 * 0.60 = 2.62

I imagine the VORB score could be better. The list could also be adjusted to take into account other important factors such as price. If you have ideas, you're welcome to take a stab yourself. All the code and data are here.



Startup School

I spent the entire morning and afternoon at Y Combinator's annual speaker event, 'Startup School'. This was the lineup of speakers for the day. I stayed until the 4pm break.

My Take On Talks

I appreciate speakers who give a detailed account of their story in its purest form. When a story is overprocessed into "lessons learned", there is a high probability I will be thrown off. There are a few reasons. First, it is likely they extrapolated the wrong lessons: when speakers generalize from a personal (single data point) example, they often neglect that each situation is unique. Second, when watered down and abstracted, most lessons sound the same. You need to focus on the product, listen to your users, identify the mission, etc. It is not the abstract lesson but the raw data point that is interesting. In sum, I wish speakers would just tell their story honestly and trust their audience to do the processing. In that regard, Indiegogo's story was overprocessed; Pebble's was much better. People might disagree.

Favorite Speaker

I usually prefer talks over interviews, because the speaker has the opportunity to build a coherent narrative. Despite that, my favorite speaker of the day was Jan Koum, founder of WhatsApp. His attitude of no-bullshit focus on the product is liberating and inspiring in its own way. A highly recommended watch.

Below is a picture I took of the Jan Koum interview. The Sequoia guy on the left is sadly irrelevant. 


I've been meaning to build this bar for a while. The plywood and 2x4s were bought at the beginning of the semester; I just didn't have the time and mental capacity to squeeze in a full-day project. Today I finally got to build it!


My room has this awkward structure in the middle (seen in the first image below; click to see more photos). Without further explanation, I'll just say that it is useless to me. At the beginning of the semester, a friend nudged me to build a bar, which I thought was a fun idea, so building it on top of this structure seemed like the perfect thing to do.


We used plywood and 2x4s previously bought and cut to dimension at Home Depot. 40" 2x4s served as pillars, and plywood covered the top and sides. Drilling and building with wood is not too hard if you have the right screws and angles for the job (I had to make a quick run to the local hardware store to buy 3/4" screws that would fit our steel angles). We (Greg and I) built the two tops and supporting structures first, then put in the sides and finalized the structure. Figuring things out in the beginning took a while, so the structural work took about 3-4 hours. We went out for a late lunch, came back, and started sanding and staining, this part also with the help of Danny.

Sanding is straightforward. Staining is slightly trickier. If you are not familiar, staining is the process of treating the wood with a paint-like substance to give it a better color (and some protection). Before you stain, you need to pre-stain with a different substance that allows the later stain to be absorbed evenly, to avoid blotchiness. Then you stain, by painting the stain onto the wood and rubbing off the right amount with a cloth. If you want a darker color, which in this case I do, you must first wait for it to dry (about 10 hours) and then apply another coat. After staining, you can add some additional water-resistant protection by applying something like polyurethane. The second round of staining and the application of polyurethane will happen tomorrow.


I did not know that plywood has a good side and a worse side. "AC grade" plywood means that one side is A (good quality) and the other is C (mediocre). We consistently used the C side by happenstance. While I'm happy that we were at least consistent, I wish we had used the A side.

Special Thanks To

Greg and Danny who built it with me. Greg especially for all the tools.