Just the other week, IDC star analyst Mike Fauscette wrote a post on a topic near and dear to my heart: What are the right measures of your R&D spend? I submit this is an extremely important topic for any software company, and no less so for companies in the travel space. Even if you’re not in the business of selling software, technology is increasingly important to a travel company’s ability to increase bookings, improve margins and deliver a great experience to its customers.
The context of the post was a presentation he and several other analysts received at the Oracle OpenWorld financial analysts summit. Oracle was trying to demonstrate their commitment to innovation and keeping their technology at the forefront of the industry…and specifically ahead of SAP and HP.
Fauscette noted that instead of using a common metric – Percent of Revenue – Oracle used raw spend. While both are common metrics, I don’t think either is an effective measure for evaluating the effectiveness of a company’s spend. Mike found both measures wanting, as neither has any direct linkage to the performance of the underlying business:
“I could spend bunches of $$ and research and develop lots of things that nobody wanted and while my spend as a % of revenue would be very high (and probably increasing as my revenues fell through the floor, at least for a little while), I could never call that success.”
We need to come up with ways to measure the effectiveness of your R&D spend, not just the amount. Specifically, I’d like to help you better pinpoint whether the activities you’re pursuing are helping meet the objectives of your business or not.
What I’m going to do is talk a little about types of R&D and then discuss what metrics you ought to be using to evaluate what you spent on them.
Why Measure R&D Spend at All?
Good question. If you’re a public company it may be a reporting requirement or a common metric that financial analysts use to forecast your stock price and set their ratings.
Some may say that it’s not even worth measuring deeply. They’ll say it’s hard to do. They may use the old line about measuring advertising expenditure: “Half of our budget is working great. I just don’t know which half.”
Or they may just say that they’re staying within their budget, so leave them alone. This is the single most important reason why I think that Percent of Revenue is the worst possible metric. Many companies “set” their budgets based on a percent of revenue. No other rationale. Now I’ll say this — % of revenue is easy to do and easy to measure, but it doesn’t tell you anything. Helluva way to run a railroad.
But you’re not one of “them”, right? You’re smarter than that.
Big R, Little r
As I stated before, the metrics you use must support the business objectives you’re trying to achieve. And so you must first understand how your R&D expenditures support those aims.
R&D is a term that is often misused and misunderstood. In the classical sense, Research (what I call “Big R”) is an effort to explore and create advanced technology which may or may not have a direct impact on today’s business, while Development is the industrialization of new technology into products for sale. However, many companies mistakenly conflate the two terms to mean the same thing. Thus when many companies refer to R&D, they’re talking mostly about development activities, which I’d call “little r”.
Similarly, many companies misuse the term innovation. Clayton Christensen segments innovation into “disruptive” and “sustaining” – the latter commonly called incremental. Disruptive innovations alter the status quo in the industry – think the iPod, the iPhone, Software-as-a-Service (e.g. Salesforce.com) and Cloud Computing. Incremental innovations are just what they sound like…they move the ball forward, but not dramatically (e.g. Microsoft Office 2010).
The truth is that most companies spend the majority of their resources on bug fixes and feature enhancements, simply trying to hold on to customers and revenues via a traditional upgrade cycle, while trying to convince others (and maybe themselves) that the new versions incorporate many innovations (“New and Improved!”, “Your shirts will be 10% whiter!”). But in most cases these are merely features masquerading as innovation.
How much you spend on Big R v. little r, or disruptive v. incremental innovation, is a strategic decision which you must make first.
And there aren’t any hard and fast rules about how much you should be spending, either in the aggregate or on specific products. Much of that depends on:
- Organizational Maturity: e.g. startups should spend a much higher proportion of revenue than an established company
- Scale: You can’t simply benchmark your % of revenue against Oracle’s if you’re a $100M company. You may want to compete with larger companies in the marketplace, but you don’t enjoy the economies of scale that your competitors may have. So don’t try to benchmark blindly against them.
- Business models: This is the “apples to oranges” discussion. Different companies have different revenue models. A company that pins growth on new license sales should look at investment rates differently than a company that’s dependent on software maintenance. And different still are long-tail revenue companies, primarily SaaS companies, which use a subscription- or usage-based model.
But once you have your strategies and objectives in place – and it’s critical that the objectives are tied to achieving over-arching business goals, not merely pursuing technology for technology’s sake – it’s important to measure the progress you’re making, which leads us to our last section.
What Are the Right Metrics?
There are of course many metrics which can be used in evaluating the effectiveness of your R&D expenditures. Let me name a few, some of which I’ll debunk, others I’ll suggest you add to your list if you don’t already use them:
- Often Used, Marginally Valuable
- % of revenue: As noted at the top, not really valuable other than as a gross and inaccurate way to compare one company’s spend versus another’s. Or simply a way to build a top-down budget.
- # of patents: Another often used metric, yet mostly directional in value. Many companies use this metric to try to gauge how “innovative” they are. But the question is really how many of these patents actually impact the business. Do they help drive revenue or control costs? A patent, or any new feature, that isn’t monetized doesn’t have any intrinsic value and falls into the category of an invention (cool new thing) rather than an innovation (cool new thing that customers want and are willing to pay for).
- Revenue- and Margin-based. This is where it actually gets interesting. Are the fruits of your labor actually improving the health of the business?
- Revenue: Pretty basic. Is it going up, going down or stagnant? If it’s either of the latter two, it likely means you’re not spending your resources on building products or services that meet your target customers’ needs.
- Vitality Index: Revenue is a very gross measure and there are many factors that impact it beyond R&D spend, making it less valuable. So let me introduce you to a concept you likely haven’t heard before, the Vitality Index (VI). VI is a measure of how much of your revenues are driven by products or services introduced in the past year (which are more likely a product of your current R&D spend). The higher your VI score, the greater the direct impact your R&D is having on business growth. The other benefit of this measure is that new products generally return higher margins than older products, so the higher the VI, the better the long term profit prospects of the company.
- Customer Retention/Churn Rate: This is extremely important as it’s far more costly to acquire a new customer than keep one. It’s also more indicative of energy spent on bug fixes and new feature introduction than disruptive innovations.
- Cost of Rework as a % of Total Budget: This is a great one because it highlights wasted energy. By definition this activity adds no value to the organization. It may help reduce attrition from angry customers, but it will not add a single customer to the business. To expect that rework should be zero is not reasonable, but like golf, the lower your score, the better. So watch this for trends and use it to identify inefficiencies in your development organization.
- Defect Injection Rate: The number of total known defects discovered during a product development cycle. This is the flip side to rework, as each of the defects ought to be fixed, although many often are not because they don’t rise to a level of importance (i.e. impact on sales) that merits the effort. But it is an important indicator of your engineering effectiveness, and it is what generates the high cost and wasted effort of rework, as noted above. Then there’s the matter of where those defects came from (bad requirements? bad coding?), but that’s a whole ’nother post.
- Defect Leakage: Worse than the number of defects that you find is the number of defects your customers find. That is, as long as they are still customers. If this happens too frequently, you can expect real (negative) impacts to customer satisfaction, customer retention and your corporate reputation as a reliable provider of technology.
- Variance to Budget by Product/Initiative: Self explanatory, but it’s important to look at the performance at the detail level rather than in the aggregate. It will help you identify underperforming teams.
- Variance to Release Schedule: Extremely important as missed release dates provide a black eye for the organization and represent lost revenue opportunities that can’t be recovered. It’s not a strict financial measure but has a direct financial impact on the company.
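One nice property of the revenue- and quality-based metrics above is that each is a simple ratio, cheap to compute once the underlying data (revenue by product, customer counts, effort logs, defect counts) is being collected. As a minimal sketch – every figure and function name below is hypothetical, invented purely for illustration – tracking them might look like this:

```python
def vitality_index(new_product_revenue, total_revenue):
    """Share of revenue from products or services introduced in the past year."""
    return new_product_revenue / total_revenue if total_revenue else 0.0

def churn_rate(customers_lost, customers_at_start):
    """Fraction of period-start customers who left during the period."""
    return customers_lost / customers_at_start if customers_at_start else 0.0

def rework_share(rework_hours, total_hours):
    """Share of total engineering effort spent redoing earlier work."""
    return rework_hours / total_hours if total_hours else 0.0

def defect_leakage(found_by_customers, found_internally):
    """Fraction of all known defects that escaped to customers."""
    total = found_by_customers + found_internally
    return found_by_customers / total if total else 0.0

# Hypothetical trailing-12-month figures for one product line
print(f"Vitality Index: {vitality_index(40_000_000, 100_000_000):.0%}")  # 40%
print(f"Churn rate:     {churn_rate(35, 500):.1%}")                      # 7.0%
print(f"Rework share:   {rework_share(1200, 8000):.1%}")                 # 15.0%
print(f"Defect leakage: {defect_leakage(20, 180):.0%}")                  # 10%
```

None of this replaces judgment – what matters is watching the trends quarter over quarter (is the Vitality Index rising? is rework falling?) rather than any single snapshot number.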
What’s your POV? Are you using these metrics? Do you have others that you’d like to share? Will you do anything different tomorrow than you did today?
Ness Software Product Labs has a strategic consulting practice that helps organizations evaluate the effectiveness of their R&D operations, both by helping establish a structured metrics program and comparing current processes to industry best practices. The final result is the creation of an actionable plan to enhance software engineering and testing practices tied to expected results.
NB: Hat tip to Dr. Jerry Smith, a former colleague who helped me develop some of my thoughts around R&D metrics and introduced me to the concept of the Vitality Index.