Like Rome, great products are not built in a day

I like small, frequent product releases. These tend to be less risky, offer more opportunities to learn, and provide potential value sooner. However, this does not mean mindlessly pursuing quick wins, or even that these quick releases will on their own be particularly monumental. There are no shortcuts to making an impact: one still needs time, focus, and patience.

Here I go through three different examples of good things you cannot hurry: the cumulative return on investment, building a comprehensive feature set, and the number of iterations you need to go through to learn and ‘get it right’, as it were.

Rome wasn’t built overnight, and neither were your favourite products.

The compounding returns of iterative refinement

Smaller releases can vary in impact, but a single improvement is rarely a game changer. However, if you keep your focus on a certain area, the cumulative improvements can add up significantly. Below is a nice example of this from Pitch, presentation software that prides itself on its slick design and user experience. Having used it for a while, I can say that pride is warranted, but they did not get there from one day to the next.

This tweet from Pitch elegantly illustrates how much of a difference six months of targeted improvements can make.

None of the improvements shown in the tweet will on their own have caused much of a stir, but six months down the line one can really see the difference. It demonstrates how sustained focus on a problem can lead to vast improvements in a short amount of time.

There are two key takeaways from this. The first is that it is important to resist the urge to switch from one problem to the next, as constant switching prevents you from solving any single problem very well. You should therefore take some care in identifying the right problem to solve.

This takes us to the second point. Know how an individual improvement fits within a larger constellation of work. If one had looked in isolation at any of the minor improvements in the example above, none of them may have been implemented as the benefit from each one would have been deemed too small to make it worthwhile. However, by focusing on the underlying problem, one can start to see how they all jointly address it, and the cost-benefit analysis shifts. The whole is indeed greater than the sum of its parts.

Getting to a comprehensive feature set

As Ben Horowitz writes in The Hard Thing About Hard Things, there usually isn’t a single silver bullet that will solve a hard problem; rather, it takes ‘lots of regular bullets’. This does mean, though, that you need to be in it for the long haul if you intend to make meaningful progress on harder challenges.

One example of this is the several ways in which people can extract their data from ChartMogul. ChartMogul calculates revenue data for SaaS businesses and, as I wrote at the time, a lot of the value of such a data product comes from people being able to export data and put it to work. However, different people will have different preferences on how they want to extract their data, and they may well require different implementations.

For instance, CSV exports obtained via the UI are easy for anyone to download, but the process is harder to automate or interact with programmatically. Conversely, the API endpoints fit this use case a lot better, but they require technical know-how and more work to set up. ChartMogul also provides direct integrations with different tools, which can remove some of the technical complexity, but that doesn’t help you if the integration for your particular tool does not exist. An integration with Zendesk will be of little use to someone using Intercom instead.

ChartMogul caters to these different customers and use cases, but it took years for these different options to be created, and they are still being improved and added to. Remember, it is a marathon, not a sprint.

Learning through iterations

Eric Ries’ book The Lean Startup popularised the cycle of ‘build, measure, learn’. The loop is fairly self-explanatory: you build something, you measure how it does, you learn from this, and then you repeat the cycle. The more times you iterate, the more you learn and the better the product becomes. However, don’t let the small number of steps in the loop deceive you: the real world doesn’t tend to be this tidy. You may have to go through the loop multiple times before you achieve the desired effect, and other work will get in the way of doing multiple iterations. Furthermore, sometimes it can take a while before you start seeing the impact of your work, which also postpones how quickly you can learn. Perhaps you made a new API endpoint available to your users, but the technical implementation required on their end means you have to wait longer before they are able to interact with it.

All of this is to say that the learning itself will always take time, and sometimes there is a limit to how fast you can validate your assumptions.

Enjoy the journey

Building products takes time, especially good ones. No great product got there overnight, and that comprehensive set of features you see in mature products wasn’t all there from the start (think of how you couldn’t copy and paste on the first iPhone). Furthermore – and you can call me cynical if you want – there will always be bugs and issues to address. This makes it even more important to celebrate your victories along the way and to see how much progress you have already made.

I will leave you with a final tweet.

While not all products become great with time, no great product got there overnight.

Charting the data: Three takeaways from working on an analytics product

If a picture is worth a thousand words, then a good chart is a match for even the most verbose of descriptions. This might help explain why analytics are everywhere.

Take Twitter as an example. It would never be as popular as it is without showing its users how well their tweets are doing. Analytics are central to Twitter and other social media companies: they may be fairly crude, but they are also crucial for triggering that dopamine hit that brings people back.

The ubiquity of analytics hides the potential complexity underneath, especially when you start to consider products where the analytics are more front and centre. Rather than Twitter, think of something like Hootsuite’s Analyze, which offers vastly more information and – hopefully at least – insights. Creating and maintaining pretty charts isn’t as easy as it seems, and I will be touching on three things that weren’t salient to me before I started working in the space.

#1: The charts are just the tip of the iceberg

This is a lesson I learned back in my research days, before I moved into product management. After completing my data collection, I would be keen to run my analyses and contribute to scientific knowledge. Yet, I quickly learned that there was a lot of work still to do before the data would be ready for analysis. I had greatly underestimated the amount of data processing that was required. When I eventually got to run the analyses themselves, it turned out to be anticlimactically quick.

Like a good iceberg, there is plenty going on under the surface
About ninety percent of an iceberg is under the surface. Think of these proportions the next time you see a dashboard.

The data crunching can be divided into two stages: the initial processing, where you get the data into a state from which metrics can be calculated, and then the calculations themselves. Let us look at the latter stage first. The metric you are going for will determine the complexity of the calculation. Working out something like the number of blog post views in a given month is relatively straightforward once you have the number of visits and when they took place. In contrast, calculating the customer churn rate of a SaaS business is a more involved process. You need to take into account how many customers you had at the start of the month and how many canceled that same month, but count only the cancelations from customers who were with you before the start of the month. You may also want to control for any cancelations that were followed by reactivations within that same period.
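To make the steps concrete, here is a minimal sketch of such a churn calculation. The function name and exact netting rules are illustrative only, not ChartMogul’s actual formula:

```python
def churn_rate(customers_at_start, cancelations, reactivations):
    """Monthly customer churn rate, as a percentage.

    customers_at_start: customers at the start of the month.
    cancelations: cancelations during the month, counting only
        customers who were subscribed before the month began.
    reactivations: of those cancelations, how many reactivated
        within the same month (netted out of the churn).
    """
    if customers_at_start == 0:
        return 0.0
    return (cancelations - reactivations) / customers_at_start * 100

# 200 customers at the start of the month, 12 qualifying
# cancelations, 2 of which reactivated before the month ended:
churn_rate(200, 12, 2)  # → 5.0
```

Even this toy version has to encode decisions (who counts as a cancelation, how reactivations are treated) that a naive ‘cancellations divided by customers’ formula glosses over.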

Whether you are dealing with blog post views or customer churn rates, the calculation itself may be the easy bit. Imagine you want to know which are the top five countries your users are based in. If you already have the country information for each one, this task is fairly easy. Now imagine you only have their GPS location. You will first need to figure out which country each user is in. If we want to make it really complicated, let’s assume you only have the raw GPS data. The amount of processing to determine the countries just increased manyfold.
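The happy case really is close to a one-liner. The sketch below uses made-up country codes to show how little work remains once the country field has already been derived; all the effort lives upstream, in producing that field from raw locations:

```python
from collections import Counter

# Hypothetical per-user country codes, i.e. the case where the
# country has already been derived for each user.
user_countries = ["DE", "US", "US", "FR", "DE", "US", "GB", "ES", "FR", "US"]

top_five = Counter(user_countries).most_common(5)
# → [('US', 4), ('DE', 2), ('FR', 2), ('GB', 1), ('ES', 1)]
```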

Let’s hope you don’t have to do the equivalent of determining your users’ locations from scratch, but bear in mind that some of your product’s allure will come from information you are providing which isn’t easily available elsewhere. Doing the hard work for someone else is one of the reasons people will come to your product. Just make sure you are providing metrics that people actually use and care about.

#2: The virtues of transparency

Since your data processing is one of your competitive advantages, you might be tempted to keep it under wraps, as you might otherwise feel like you are revealing the ingredients for your secret sauce. However, opaqueness is not a good way to build trust in your numbers. After all, you aren’t providing numbers pulled out of nowhere, but rather something that would be too much work for your users to do themselves. The way in which it is calculated is not going to be a trade secret.

“I could tell you how I calculated your numbers, but I am afraid that’s a secret. Don’t worry, just trust me.”

Moreover, how different metrics are calculated can be a source of heated debate. The different methods will tend to have their own strengths and weaknesses, so you should make it clear which method you are using. People will then also know how to better interpret the data you are providing. Think of the churn rate example from earlier. People are better served if they can follow your approach.

A good example of this in the world of SaaS metrics is around ARR (annual recurring revenue) and how it is often used interchangeably with annual run rate. The confusion is not surprising considering the shared initials. The former is the recurring revenue which comes from your annual plans, whereas the latter is how much recurring revenue you would get from all your plans – regardless of their duration – in a year (or MRR x 12 for short). One can argue about the merits of both and when each one is useful, but the main thing is not to confuse the two. That being said, it could be worse… I once heard a podcast refer to ARR as ‘average recurring revenue’.
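A quick illustration of the difference, with made-up figures (the helper names here are mine, not standard terminology):

```python
def annual_run_rate(total_mrr):
    """Annualise the current MRR from ALL plans (i.e. MRR x 12)."""
    return total_mrr * 12

def arr_from_annual_plans(annual_plan_revenues):
    """ARR in the stricter sense: recurring revenue from annual plans only."""
    return sum(annual_plan_revenues)

# A business with $10k of total MRR, whose annual plans bring in
# $40k and $20k per year respectively:
annual_run_rate(10_000)                  # → 120000
arr_from_annual_plans([40_000, 20_000])  # → 60000
```

The same business reports a very different figure depending on which definition it uses, which is exactly why stating your method matters.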

As a final point on this, the building blocks of your metrics have value of their own. If you are assessing the loading time of your application, the average time is interesting, but you also want to dig into the numbers that make up this average, and so identify the major sources of latency. This is key for identifying which performance improvements will yield the highest return on investment. Datadog do this well, as they make it really easy to explore what their aggregate charts consist of.
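A small illustration of why the average alone can mislead, using Python’s standard statistics module and made-up response times:

```python
import statistics

# Hypothetical response times (ms) for one endpoint; mostly fast,
# with a couple of slow outliers.
samples = [80, 90, 85, 95, 100, 90, 85, 2500, 2600, 95]

mean = statistics.mean(samples)      # 582 ms, alarming on its own
median = statistics.median(samples)  # 92.5 ms, most requests are fine
p95 = statistics.quantiles(samples, n=20)[-1]  # the tail, where the problem lives
```

The average suggests the whole endpoint is slow; the breakdown shows a handful of pathological requests are responsible, which points at a very different fix.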

#3: The challenges of improvement

Like any product out in the world, there will always be things you would like to add or change. For an analytics product this is harder than you might expect, so be careful if you are someone who likes to “move fast and break things”, as some things are harder to repair than others.

When you tweak the way you calculate a metric to account for a new scenario, you are in a tricky situation. If you reprocess pre-existing data, then your previous values will also change (and people don’t take kindly to that happening out of the blue). If you don’t reprocess, and the changes apply only going forward, then you end up with inconsistent numbers, and it is hard to make clear why your January numbers were calculated differently from your February ones. And if you are genuinely improving the calculations, should you not provide this improved data?

There are multiple ways of dealing with or mitigating these challenges, including adding new data settings, coordinated data reprocessing, and being transparent on how the numbers are obtained (as mentioned above). A clear data processing pipeline will also make any improvements easier to implement. Yet the best cure is prevention, so plan with care how you process your data and calculate your metrics.

Final thoughts

Working on a data product can present its own unexpected challenges, even for those who have worked with data before. Not only do you have to make the underlying complexity clear and actionable, but you also have to provide the right amount of depth and transparency. All the while, you are dealing with a constant flow of ever-changing data. As we have seen, it can get complicated quickly!

But it’s worth it. Information is key for empowering people to make the best decisions, and being able to provide those insights is a rewarding thing indeed.

I can’t read your mind: Four things I learned from psychology

Whenever I tell people that I have a background in psychology, the most common reactions I get tend to be “Oh my god, you must be analysing me!” or “Are you reading my mind?” These are understandable reactions, but the answer in both cases is no.

This informal polling (if one can call it that) is indicative of the misconceptions about what the field entails. I do not intend to start a philosophical discussion of what psychology is ‘really about’, but since mind-reading did not make it onto my list of skills, I wanted to highlight some of the key things I did learn, and how they have changed my perspective on things.

A small caveat before I begin: psychology is an incredibly broad discipline with several different methodologies and domains. The point of this post is not to distill all of it, or even my own specialisation, which was cognitive neuroscience. Instead, I am focusing on some of the things I encountered as early as my undergraduate years and which have stayed with me since.

One of the top results when you type ‘psychologist’ into Unsplash… These books were not on my reading list (though I am sure the syllabus varies from university to university). Photo by Carl Cervantes on Unsplash

Statistics over anecdotal evidence

Statistics might not be the first thing that comes to mind when thinking of psychology, and yet it’s central for being able to carry out and assess research.

Let’s imagine that you are running an experiment and your participant shows up. They complete the task at hand but unbeknownst to you they were hungover from the night before, so they performed worse than they would have otherwise. Another participant shows up, who happens to be particularly alert having had a cup of coffee. Another is particularly risk-averse and so performs the task at hand more carefully, which also influences their results. And so on and so on. Furthermore, behaviour tends to be influenced by several different factors, not just the presence of a hangover. The data are always messy, and you will need to use some statistics to get at any underlying patterns. There are some exceptions, but anecdotal evidence alone is not going to get you far.

If you base your conclusions about complicated phenomena on limited data, there’s a good chance you will miss the overall trends. On the other hand, you still need to be careful about how you pool your data together (see Simpson’s paradox for an example).
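For the curious, here is a compact demonstration of Simpson’s paradox, using the figures from the well-known kidney-stone treatment example:

```python
def rate(successes, total):
    return successes / total

# (successes, attempts) per treatment, split by case difficulty.
# Within each subgroup, treatment A has the higher success rate...
small_a, small_b = (81, 87), (234, 270)    # A: ~93% vs B: ~87%
large_a, large_b = (192, 263), (55, 80)    # A: ~73% vs B: ~69%

# ...yet pooled together, B comes out ahead, because A was applied
# mostly to the harder (large-stone) cases.
overall_a = rate(81 + 192, 87 + 263)  # ≈ 0.78
overall_b = rate(234 + 55, 270 + 80)  # ≈ 0.83
```

Pooling reverses the conclusion because the group sizes are uneven: how you aggregate is itself an analytical choice.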

Experimental design and the indirect nature of measurement

You are rarely measuring directly what you are interested in, so you have to select your proxy variables with care. For instance, you can assess short-term memory by testing recall of number sequences, attention through reaction times, and personality via questionnaires. All of these relate to the phenomenon of interest, but none tells you the full story. You also have to consider alternative explanations for your results. This is one of the reasons why journal papers contain multiple experiments: researchers are attempting to rule out alternative explanations for their findings.

At least that’s what people should be doing. If you want to read a great account of shoddy and unethical methodologies that are partly to blame for the replication crisis in psychology and beyond, then I strongly recommend the book Science Fictions or read this SMBC comic instead.

It’s an illusion: Your brain doesn’t work how you think it does

Studies on neurological patients have provided great insights into how the human mind works. There are split-brain patients where the left hand doesn’t know what the right hand is doing, ‘blind’ people who can still reach out to objects, and patient HM, who could not form new memories but could still learn new motor tasks. More than anything, they demonstrate the modularity of the brain, and how even though we perceive things as a unitary whole, that is not really the case.

This is corroborated by studies in non-clinical populations. A lot of research consists of manipulating experimental variables and seeing if this leads your brain astray: you infer how it works by seeing when and how it doesn’t. Visual illusions are a great example of this, as the way they trick you tells you how perception works. For instance, did you realise that you cannot see the blind spot in your eye? A nice demonstration of how what you perceive is just your brain filling in the gaps! For a more immediate demonstration, here is one of my favourite visual illusions I’ve stumbled upon on Twitter.

Perception is not the only part of cognition where you see the brain making inferences and crafting its own reality. In fact, it’s a running theme. For instance, instead of being creatures of pure reason, people use heuristics in their decision making. Another example is memory, which turns out to be a lot less reliable than people think. Not only do memories shift with time, one can even induce false ones. The unreliability of eyewitness testimonies has stuck with me since my first semester as an undergraduate… I think!

To err is human; to forgive divine

Some might conclude that since people are so fallible and irrational, we might as well throw in the towel and call it a day. However, my biggest takeaway is how amazingly well our minds work regardless of the shortcuts they take.

In the 1960s, Marvin Minsky gave some undergraduates the task of figuring out vision over the summer. Since we do it all the time, how hard could it be? As this XKCD comic neatly illustrates, the answer is “very”. Our minds are victims of their own success: they make it look easy when what they do is mind-bogglingly complicated (if you pardon the expression).

I might take what people say with a larger pinch of salt, but I am also more willing to cut them (and myself) a bit more slack.

Remote working: Shortening the distance

As I write this it is late 2020, and circumstances have forced thousands of companies to adopt remote working, even if only temporarily. By definition this is a more isolated setup than the alternative, and it is not everyone’s cup of tea. However, several companies, such as GitHub and Zapier, have been operating this way for years, so it is clearly a viable option.

So, how do you manage the transition to such a mode of working? I have been working at a company that shifted to a remote-first setting two years ago, and I was recently on the Italian-speaking podcast SpaghettiCode to talk about the shift. We covered a lot of topics, including the importance of trust, good documentation, and the challenges of working asynchronously. We also discussed how to maintain the relationships with colleagues and how to foster what the folk at Balsamiq call “teaminess”.

I am a people person, so for me this last point is particularly important and something I will expand on.

Be proactive

Let’s start with something obvious: communication is important, and it helps to be proactive in reaching out to others. You do not always have to wait until the next scheduled meeting to contact someone, especially when it comes to people with whom you do not interact often. Perhaps you are curious about how often the Success team encounters a certain issue, or want to congratulate a Sales Rep for closing a big deal. My advice is to just reach out; people will usually appreciate the interest. Just don’t expect or demand an instant response, as people have their own things going on, especially when you don’t share a timezone.

A corollary to this is that when you are on the receiving end, try to be responsive. No-one likes to feel like they are talking into the void, so an acknowledgement of the message goes a long way, even if you might be too busy or unable to help them.

“No-one even gave my message a thumbs up…” Photo by Stefan Spassov on Unsplash

Opportunities for ‘keeping it real(time)’

I have worked with a team where our scheduled stand-up calls evolved into a more versatile time slot which we would adapt to the needs of the day. These would always include a stand-up component, consisting of the relatively standard discussion of current tickets in different stages of a Kanban board, while also providing us with an opportunity for shooting the breeze or a starting point for deeper dives into current work challenges.

The main benefit of this arrangement was that it provided us with a regular opportunity for interaction which was flexible enough to allow us to build our relationships beyond just the status of tickets. That being said, views and desires around stand-ups (and meetings) vary a lot, so you have to see what works in your setting. This takes us to my last point.

Variety is the spice of life

Ultimately, you get to know your peers by interacting with them in different ways and in different formats. Just as with the stand-up example above, the key is to create opportunities for interaction, even through something like shared weekly virtual coffee sessions. People are different so it’s good if opportunities vary in type and frequency.

I’ve heard of teams playing games such as Among Us, or doing virtual coffee tastings. At ChartMogul, we even ran a company-wide hackathon. Online tools make these things a lot easier than they used to be. I recently attended Mind the Product’s online conference, and perhaps the best thing about that was attending it and discussing it with colleagues, some of whom I’ve never met in person!

A nifty tool for Slack that deserves a shoutout is donut.com, which pairs employees for one-off one-on-one sessions. It might feel a little corny at the start, but it can really help remove the friction in getting to know and interacting with your colleagues.

Benefits beyond the individual

Building relationships with your colleagues is not only good for scratching a socialising itch. It can make people more productive, as they support each other at work and share information across the organisation, which offsets the downside of the occasional distraction. Apart from making teams stronger, it also decreases staff turnover and helps maintain motivation levels. Staff interactions are therefore not just something fun, but something worth investing in. This includes occasionally meeting in person (circumstances allowing). Whether you run regular company off-sites, meet at conferences, or organise smaller team reunions, all of these go a long way in breaking down barriers. They will also make everything listed above a lot easier.

Need for speeds

“If you want to improve the UX of your product, make it 10% faster!”

I heard this years ago in an episode of “What is wrong with UX” and I still ponder upon it and think “Yeah, that is pretty spot on, those two self-described ‘old ladies who yell at each other’ really hit the nail on the head there.”

The original context for that quote was that product improvements do not always need to come via a new feature or a UI overhaul. Sure, these can help (although sometimes they cause more harm than good), but simply making a software product faster can make all the difference. No-one has ever complained that a website loaded too fast, and judging from this post by the Nielsen Norman Group, people are unlikely to do so any time soon.

Speed (and performance in general) is easily given short shrift, but this may be partially explained by the fact that, just like the term ‘user experience’, it is not a single item but a multi-faceted property of the product itself. Let us unpack this a little.

Your product’s many speeds

If you want to increase how fast your product is, you quickly realise that there are several separate things you can tackle. In the case of a web application, it might be how quickly a web page loads, how fast a user can download or upload files, or how quickly they can query their data. Furthermore, each of these examples can be broken down into even more items.

For instance, there are many ways in which you can attempt to improve the performance of a particularly slow query, ranging from optimising the query itself to changing the underlying data model or updating your infrastructure. Each of these can improve how fast a user gets their results, but the other factors will still present their own bottlenecks. You can keep optimising said query, but if your infrastructure is creaking, that will only take you so far.

So if performance can be affected by many things, how do you know whether you are improving ‘it’?

The prevention paradox and the importance of measuring

Back in the day, I spent a summer as a cleaner. One thing you learn pretty quickly is that people notice the spots you missed far more than all the ones you didn’t. A performant product is much the same: people easily take it for granted, but they notice very quickly when expectations are unmet. A lot of work can go into maintaining functionality, yet fanfare rarely follows when this is done successfully. For a more poignant example of behind-the-scenes work being taken for granted, consider the attitude of some when the Y2K bug failed to wreak havoc everywhere.

I am not saying that cleaning hotel rooms is harder than preventing the Y2K bug from causing havoc, but then again rooms like this don’t just happen…!
Frank Schwichtenberg / CC BY-SA

This is unfortunate, as it means that it can be hard to communicate why certain tasks such as infrastructure improvements are necessary, especially in a preventive capacity. Even you and your team might not be aware of some of the value you have provided.

As an example, a team I have worked with implemented a change that better distributed the load our application was under. We were pleased with the results, but did not think about it further. A few months later we inadvertently undid this change. We noticed the repercussions very quickly as the performance promptly started to degrade across the application.

It turned out that this change we had made had not just mitigated an occasional issue, it had also allowed us to onboard larger customers who made more use of our application. It wasn’t the best way for us to realise the value of our previous work, but it was effective.

The next best thing, we found, was monitoring our application before and after a proposed improvement. Investing time in monitoring and logging proved invaluable in assessing whether our improvements had been successful, and in making sure that new product developments did not compromise the performance of the application. Moreover, it helped us communicate our efforts to the rest of the company.
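As a sketch of what the simplest form of such measurement can look like, here is a hypothetical timing decorator (a real setup would use proper monitoring tooling, but the principle of capturing numbers before and after a change is the same):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def timed(func):
    """Log how long each call takes, so that before/after comparisons
    of a proposed improvement rest on numbers rather than gut feeling."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("%s took %.1f ms", func.__name__, elapsed_ms)
    return wrapper

@timed
def generate_report():
    time.sleep(0.05)  # stand-in for the real work
    return "done"
```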

This last point should not be underestimated. If you want to have the support of stakeholders across your company to make crucial-but-not-always-shiny improvements, then you are going to need something more than “We did lots of refactoring which has made the product a lot better in ways you cannot see. Trust me.”

Also, the engineers have worked hard on these improvements; you should help make sure their hard work is recognised!

Summa summarum

One could say that performance has a branding problem. It is easily taken for granted or becomes an afterthought. Yet, like delayed public transport, everyone complains when things slow down.

Yes, you should strive for a product that is so fast and reliable that it disappears into the background, but you should also know that it is going to take several steps to get there. Therefore, you might as well communicate and celebrate the milestones along the way.

Transitioning from academia to product management

The world of academia can be great. There are many benefits, from the loftier goal of advancing human knowledge to the variety that academic roles provide. There is, of course, also academia’s much-vaunted flexibility.

While this is all arguably true, I found myself comparing academia to a partner you like but do not love: perhaps best not to dedicate one’s life to it, and to part as friends. Not only that, but academia does not have a monopoly on the above perks.

Despite the glut of PhD students compared to the number of temporary academic positions (let alone permanent ones), university departments tend not to emphasise alternative career paths. And yet PhDs do end up elsewhere, sometimes carrying out research in industry (as strongly recommended by this post), but also in more disparate careers. While I do not want to knock academia, as I do see its value, what I want is to offer an account of how my own transition took place and how my research background has helped me. I will also cover some things I have had to unlearn. In doing so, I hope to get some current PhD students thinking about their options, and realising that they are not as limited as they might think.

My own transition from academia

My stint in academia was not a particularly long one, consisting primarily of a position as a research assistant and then a PhD itself. My research was in experimental cognitive neuroscience, with a focus on how visual attention and movement planning and execution are linked. In practice this meant running eye- and motion-tracking experiments in basements with no windows. On the ‘bright’ side, I was at the University of Edinburgh in Scotland, so I was not missing out on much sun.

Basically a lot of this, but with the lights switched off.

Fun as it was (and it actually was), towards the end of my PhD I did decide it was time to try something different, not least because of the career prospects mentioned above. If I could directly leverage what I had done, even better. I did some research on my own, spoke with different people, and I thought the world of UX research seemed like an appealing next step. I was also lucky enough to get a taste of it through an internship program specifically designed for PhD students run by the university.

Although user experience (UX) permeates everything, I was particularly interested in it in the context of software development, and thought it would be good to see how the sausage (i.e. software) is made. With this in mind, during the last months of my PhD, and following the internship, I got a part-time job as a QA tester in a software company.

Turns out making sausages and software may share more similarities than one might initially have thought.

This turned out to be very serendipitous. Due to the company’s relatively small size, it was a great environment to learn and be involved in different aspects of the business. Not only that, but there was also room for growth, and I was offered the chance to become the company’s first product manager. Despite initially not quite knowing what the role would entail, and the steep learning curve that followed, it’s the role I am still in five years later, and I consider myself more than a little lucky.

But lucky only to a point. I am sure that if I had not ended up in product management, I would have found another role that was also engaging and rewarding (though don’t ask me what it would have been). There are more opportunities out there than one can possibly be aware of, but you need to expose yourself to the different options first. And the skills and perspective you may take for granted will still be relevant and help you.

About those transferable skills

Before embarking on the final section of this post, a small disclaimer. What follows does not imply that those from a non-academic background lack these skills; on the contrary, some will have them in spades. The point is to emphasise that, as a PhD candidate, you have them too. Furthermore, this is of course based on my own experience, and other experiences will vary.

With that out of the way, below are some things from my PhD which I think have been of use, as well as some words of warning (WoWs).

Writing

They say practice makes perfect (although I prefer to say ‘better’), and if there is something you will get to practise often, it is writing. You will have covered a range of formats and word counts, and written a number of papers, proposals, and at least one thesis. Not only that, but you will also be in the habit of adding references to your claims and statements. Revisiting old experiments for which you had clearly written up the method and analysis, versus those for which you had not, serves as a powerful lesson. Writing up my first experiment in my thesis took a lot more work than the ones that followed.

As a product manager you will have to write your fair share of documentation, design proposals, release notes, and lots and lots of tickets. Just as with your doctoral work, the time will come when you will be glad you documented how a particular feature works (or at least was supposed to work). Even better, this time the rest of your team can benefit from your documentation as well!

WoW: Documentation and tickets have a much more limited shelf-life than journal articles, so adjust the amount of time you spend writing accordingly. Let’s face it, the documentation for a new feature is out of date as soon as the first bug is found, so accept that what you write will be useful but short-lived. Also, though references are useful, use them more sparingly.

Trust me, outside academia this ratio of references to text does not go down as well as you might think.

Tutoring

Even though marking always takes longer than the guidelines suggest, tutoring was a great experience. It helps you develop skills in text editing, giving helpful feedback, and running tutorial sessions, as well as in actually teaching students the subject matter.

As a product manager, I have found these skills useful when running and facilitating knowledge-sharing sessions, running retrospectives, and collaborating on briefs.

WoW: Whereas the course material is the same for a given year group, all of whom share the aim of passing their exams (with admittedly a wide range of dedication), you will have to do more tailoring of the material for your different stakeholders, whose interests will overlap but also diverge.

Analytical skills

Trying to define what analytical skills are could probably take multiple posts, so I will not dwell on it too much here. Suffice it to say that you will be keen to understand ‘what is going on’, and you will use data to find out (be it qualitative, quantitative, or both).

One of the key responsibilities of a product manager is to drive and own the prioritisation of the issues you and your team are trying to address. Understanding the problems your users are experiencing, and why, is a key requirement for doing so. Note that this should ideally still be a collaborative process, and a product manager should not be the sole determinant of what gets worked on next (see here for a great and nuanced post on the subject). A data-driven approach will help you leave your ego at the door and find the best solutions as a team.

WoW: Do not wait until you understand the problem perfectly, or know exactly which issue to prioritise next, before implementing a solution. The data at hand will typically not be of the standard you would expect for a peer-reviewed journal, but do not wait for that to be the case before taking action (feel free to insert a zinger about the replication crisis here). You will need to move faster than you might be used to.

Working independently

As a PhD student you do your own work. You study the literature; design, build, and run the experiments; do the data analysis; and put it all together. There is of course some support and assistance, but you own all of the work. Much of this you will not have done before, so you become used to learning by doing, and to lots and lots of Googling. You will have practice in picking up new tools and techniques as you go, and will be relatively self-sufficient. This is grand, but I would argue it is the transferable skill with the highest transition cost.

WoW: After being self-reliant for so long, you may have forgotten what it is like to work with others. On the one hand, you might be surprised at how much more you can achieve as a team; on the other, it can take some getting used to relying on others, either because you lack the required skills or the time. Although the people you work with will colour the experience a lot, with the right team it can be far more rewarding, and your own impact will be magnified by your colleagues.

Whatever it was your PhD was in

Whatever your topic of research, by the end of your thesis you will be among the most knowledgeable people on the subject, and even if you move to something very different, chances are you will be able to draw parallels with whatever you transition into. You will also have become experienced in whichever research methodology you used during your studies.

In my case, a background in studying how the brain perceives and responds to stimuli has been helpful when trying to implement more fluid user flows and intuitive user interfaces. Even more useful has been the exposure to experimental techniques and data analysis. You learn to listen to your users (while not asking too many leading questions), while also understanding that what users say and what users do are two overlapping but separate things. (In case you forget, you will be reminded any time you build a feature for a prospect who requested it but still ended up not buying the solution.) Running usability sessions and customer interviews is similar to booking participants and running one’s own experiments.

As a final point, I currently work at a company that focuses on revenue analytics for SaaS companies, where having had to clean up and process my own data has been very helpful indeed. This was less pertinent for the preceding company, but you never know when a certain experience will become relevant!

WoW: At the end of the day, the work will be different and there will be plenty of new things to learn. You will have to apply your previous skills in a different way and to a different degree. Do not worry, they are good ones to build upon.

Tying it all together

The point of this post has been neither to diss academia nor to praise PhDs. It is instead to point out that if you are in the process of completing a doctorate, you have more options than you may realise. And since the skills you have acquired will undoubtedly help you elsewhere, there is no need to stay in academia because of the sunk cost fallacy. You may stay or go – the choice is yours!