Welcome to
Caches to Caches

This blog is devoted to the broad interests of Gregory J Stein, which include topics such as Numerical Modeling, Web Design, Robotics, and a number of my individual hobby projects. If there's any article you would like to see, or something you've been wondering about, be sure to let me know on Twitter.

When I first arrived at graduate school, one of my professors shared his experiences on managing students. He told us that his students tend to fall into three bins of roughly equal measure: one third of students earn their keep, one third don't, and a final third produce enough work to fund both themselves and the underperformers. The implication was clear: Work hard and make sure you don't end up in the bottom third.

My graduate school experience was in the sciences, where my advisor was my boss and paid the bills. The social sciences have less direct management, but these lessons about effective management are no less important there.

What has bothered me the most in the years since is the idea that these categories are fixed. That students who find themselves in the bottom third are stuck there. That the advisor is somehow abstracted away from how their students perform. How they treat underperforming students is what separates the good mentors from the bad.

Too many times, I have seen hard-working, brilliant students overlooked by their advisors because they didn't show enough immediate progress. These students slowly receive less attention, despite often needing more, and their access to experimental resources wanes. Students who adapt quickly to graduate school or get lucky with an early project win favor, while the remaining students may not receive another chance to succeed. For advisors, this bottom-third mentality is a self-fulfilling prophecy.

Mentorship is difficult. As educators and mentors, it is our responsibility to work to make sure that every student succeeds. Playing favorites isn't always conscious, but good mentors look for other ways to engage with all their students and work with them to help them succeed. Here are some management lessons that good mentors keep in mind.

Different people may need different management styles to succeed. There is no one-size-fits-all approach to management. I have seen too many faculty manage as if every student were a cookie-cutter image of themselves. Yet everyone has different interests, different reasons for having pursued their career path, different strengths and weaknesses. Good advisors recognize that their students may need different levels of external pressure: while some students may need to meet multiple times a week, many are extremely productive for weeks at a time without supervision.

Even if students are managed differently, it is important that different standards do not apply to different people. Particularly in an academic environment, it is important to ensure that criteria for completing degree requirements or passing classes be clearly defined.

A lack of early success does not imply a lack of future success. I changed fields before earning my PhD. I was helpful enough to other students in the lab, but, for a while, I didn't have the domain-specific knowledge to make independent progress toward my own research goals. Some students take longer to spin up than others.

Burnout is different from laziness. Sometimes, burnout happens. It's not great, and we should do what we can to identify its signs early. Everyone needs downtime. Constant sprinting towards deadlines without taking time to rest and recover creates incredible stress, and we all react to stress in different ways. It is important to ensure that students have guilt-free time to relax.

Changing labs/managers/jobs is not a sign of failure. We are imperfect. Even the best advisors have trouble managing some students. All of us—advisors and students—should be willing to have an open conversation about what's not working. I changed advisors during my PhD; the lab's research was simply not what I wanted to work on for the next four or five years. Too many friends of mine didn't listen to that voice in their heads, and struggled through much of their PhD as a result. Advisors should make the option of switching to another lab clear, but do so without pressuring the student.

The book Difficult Conversations is a great resource for "how to discuss what matters most."

As always, I welcome your thoughts in the comments below and on Hacker News.

As an academic, I read a lot of papers. Part of my job is retaining what I read, since deeply understanding the work of others and building upon it is one way I come up with new research ideas. When it comes time to sit down and write a paper, I need to contextualize my ideas in a Related Work section: I include a discussion of other research that has touched upon similar problems or inspired my own work. Over my years in academia, I have settled on an annotated bibliography to manage my own knowledge base of papers.

When I started collecting papers and other references, I added annotations to PDFs. However, this doesn't scale well, since my comments and the documents themselves were difficult to search and lacked easy access to important metadata. I used Zotero for a while, but that didn't mesh with my workflow either. It was nice enough, but still required that I leave my Emacs environment. In addition, I don't really need the ability to mark up PDFs. For a paper I think I may want to find again, I only need a couple of things:

  • A paragraph-long summary of the paper.
  • A BibTeX entry for the paper.

An annotated bibliography is perfect for this. Everything is in one place. It's easy to edit and share. And I can manage the entire thing from within Emacs with ease.
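As a sketch of what a single entry might look like in an Org-mode file (the heading layout is my illustration, and the BibTeX entry below is a placeholder, not a real citation):

```org
* An Example Paper Title (Author & Other, 2018)
A paragraph-long summary in my own words: the key idea of the
paper, why it matters for my research, and anything I would want
to remember when citing it in a Related Work section later.

#+BEGIN_SRC bibtex
@article{author2018example,
  author  = {Author, Alice and Other, Bob},
  title   = {An Example Paper Title},
  journal = {Placeholder Journal},
  year    = {2018}
}
#+END_SRC
```

With one heading per paper, Org's built-in search and folding make it easy to skim summaries, and the embedded BibTeX blocks can be extracted when it's time to build a bibliography.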

Five years ago, my life exploded in complexity. I had just started a new position in a new field. I was planning my wedding. And my inability to say NO to anyone and everyone had culminated in my serving on the board of three graduate student organizations. Inevitably, cracks began to form, and my finite brain started to lose track of tasks. My calendar was sufficient to ensure that I wouldn't miss meetings, but I would often only prepare for those meetings at the eleventh hour. My productivity and the quality of my work both suffered. Something needed to change.

This guide is devoted to a discussion of the organizational system that I have honed in the time since. With it, I have found that my time is spent more wisely. Better organization means that I can consciously devote effort where it is needed early on, as opposed to scrambling to keep up, and deliver higher quality work without expending more energy.

Many of the ideas presented here derive from the Getting Things Done methodology, adapted and expanded to meet my personal needs.

You too can streamline your process. This guide is meant to serve as an example of how you might reorganize your workflow and find order amid the chaos of your busy life. Yet different lifestyles have different demands: what works for me may not work as well for you. As such, I do not expect that you will replicate this system in its entirety. Instead, I hope you will take inspiration from my system and use elements of it to build a workflow that works for you.

This document is broken into three main parts:

  • Goals: in which I dive into more detail about what it is I have tried to accomplish with my system.
  • Framework: in which I describe the core ideas and systems I employ to record information and keep track of my tasks.
  • Tooling: in which I discuss the tools—including hardware, software, whatever—that I use to implement the framework.

In addition, I conclude with two sections in which I describe what I see as limitations of my existing system and some other technical details.

Let's dive in.

As a researcher at the intersection of Robotics and Machine Learning, the most surprising shift I have seen over my five years in the field is how quickly people have warmed to the idea of having AI impact their lives. Learning thermostats are becoming increasingly popular (probably good), digital voice assistants pervade our technology (probably less good), and self-driving vehicles populate our roads (about which I have mixed feelings). Along with this rapid adoption, fueled largely by the hype associated with artificial intelligence and recent progress in machine learning, we as a society are opening ourselves up to the risks of using this technology in circumstances it is not yet prepared to handle. Particularly for safety-critical applications or the automation of tasks that can directly impact quality of life, we must be careful to avoid what I call the valley of AI trust: the dip in overall safety caused by premature adoption of automation.

As an academic, I see a lot of talks. In general, good presentations tend to be based on a good slide deck; even very capable speakers have a tough time reaching their audience when their slides are a mess. One pitfall I often see is that many researchers will take figures or diagrams directly from their papers, upon which the talk is usually based, and paste them into their slides. It's often clear to the audience when this happens, since figures in papers tend to be rich with information that can be distracting in a talk. My advice:

Avoid using unedited paper figures in talks.

At the end of every year, I like to take a look back at the different trends or papers that inspired me the most. As a researcher in the field, I find it can be quite productive to take a deeper look at where I think the research community has made surprising progress or to identify areas where, perhaps unexpectedly, we did not advance.

Here, I hope to give my perspective on the state of the field. This post will no doubt be a biased sample of what I think is progress in the field. Not only is covering everything effectively impossible, but my views on what constitutes progress may differ from yours. Hopefully all of you reading will glean something from this post, or see a paper you hadn't heard about. Better yet, feel free to disagree: I'd love to discuss my thoughts further and hear alternate perspectives in the comments below or on Hacker News.

As Jeff Dean points out, there are roughly 100 papers posted to the Machine Learning ArXiv per day!

There's a story I retell from time to time about an incredibly talented researcher friend of mine. Though the exact details elude me now, since it was a number of years ago, the story goes something like this:

My friend and I were on our way to lunch when we ran into someone he knew in the hallway, whom we'll call Stumped Researcher. He was having some odd issue with a measurement apparatus he'd built; we were all physicists, and every lab has its own custom setup of sensors, signal analyzers, etc. to probe physical phenomena. After a lengthy description, Stumped Researcher was clearly distraught, unable to collect any data that made sense, indicating that something was wrong with his setup. Without ever having seen the measurement setup and without an understanding of the experimental goals, my friend asked a question that astonished me in its specificity, wanting to know the brand of lock-in amplifier that was being used. Stumped Researcher (a bit lost, having not mentioned that any lock-in amplifier was even being used) didn't remember. My friend responded "Yeah, the older model lock-in amplifiers produced by $COMPANY_NAME ship with cables that are known to fail sometimes. I'll bet that's the problem." Sure enough, a couple days later, upon running into No-Longer-Stumped Researcher, that was indeed the problem; a quick change of cable remedied the issue.

To this day, it remains one of the most incredible instances of remote problem-solving I've ever seen. The key enabler of this ability: experience. I know my friend suspected the cable because he'd seen that failure before in the wild. Tinkering was his passion, and with the number of things he'd bought online, taken apart, and sold for parts, he'd no doubt seen it all. And yet, despite knowing how the trick was done, it certainly seemed like magic to me at the time. I find good doctors have this ability too: such a deep understanding of the body as a system that a symptom in one region points them straight to the underlying cause. Recently, it occurred to me that I occasionally do the same thing to the undergraduate researchers I work with, asking an obscure question about their code or data or algorithm and then remotely solving the problem that's vexed them for days.

The title is an allusion to the perhaps overused Arthur C. Clarke quote: Any sufficiently advanced technology is indistinguishable from magic.

I have the privilege of being surrounded by brilliant scientists, philosophers, and thinkers of all kinds, so I witness this phenomenon with relative frequency. Yet every time I see someone who surprises me in this way, I try to remember that these circumstances don't just happen: only through dedication to a craft can one gain the depth of understanding necessary to demonstrate this level of mastery. The pull of impostor syndrome is real, but I try to be inspired by these moments whenever I can. Perhaps someday I'll feature in someone else's anecdotes.

As always, I welcome your thoughts (and personal anecdotes) in the comments below or on Hacker News.