Analytics describing what happens at the U.S. patent office have been on the market for almost a decade, most commonly in the form of examiner statistics. It’s no longer news that there is significant outcome variability at the USPTO, and that it’s largely driven by which patent examiner your application is assigned to. The very same application could result in a patent after only one round of negotiation with Examiner A, but might end up abandoned after six rounds of negotiation with Examiner B. Even more frustrating: examiners within the same narrow technical area can have vastly different prosecution styles. Here’s just one example of two examiners in the same (very narrow) technology area, with approximately the same experience, who take very different approaches to their job.

Because of these glaring differences in examiner behavior, examiner statistics for the US patent office are now ubiquitous, with nearly every major US law firm subscribing to a service that provides them. Many US patent practitioners would not even consider responding to an office action without first reviewing some basic statistics about the examiner assigned to the case—allowance rate, relative favorability of an interview, etc. And increasingly, corporations that file in the US are turning to examiner metrics to help set guidelines for prosecution strategy—when to appeal, when to consider abandonment, even when to file.

But are these metrics really fair? None of them account for the quality of the application the examiner received, so perhaps the discrepancies we see are just a reflection of variable quality of applications submitted. Or perhaps it’s just because of differences in the law for different technology areas. And anyway, we must be willing to accept some range of variability; after all, patent examiners are not robots. Is the variability really outside an acceptable range?

To answer these questions, first we have to talk about how we are measuring examiners. Then we have to examine other variables that could legitimately be causing this variability, such as technology area and experience.

Some examiner metrics are more fair than others.

As a purveyor of examiner statistics myself, I obviously believe that they are fair and meaningful, but with the major caveat that it depends on *how* you are measuring examiner behavior. Some examiner analytics are unfair, and therefore misleading. But the right examiner statistics do accurately represent what’s happening, and what is likely to happen.

One very common example of examiner metrics that can be misleading is interview statistics. Many patent practitioners turn to examiner statistics to help them decide whether it makes sense to have an interview. For many examiners, just getting on the phone and talking to someone can significantly push prosecution forward. Most examiner data providers will give you two pieces of information to help make this decision: the allowance rate for applications that have had an interview, and the allowance rate for applications that haven’t.

But this ignores the obvious fact that an attorney is much more likely to have an interview in a case their client cares about. So naturally, applications that have had an interview are going to have a higher allowance rate. The real question is what happens immediately after the interview: is it an allowance? Another office action?
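The "what happens right after the interview" idea can be sketched in a few lines. This is only a toy illustration using a made-up event log; real providers compute this from actual file-history data, and the event names here are hypothetical.

```python
# Toy sketch: for each application's chronological event log, find the
# event that immediately follows the first interview, then tally the
# results across a portfolio. Event names are illustrative only.
from collections import Counter


def next_event_after_interview(events):
    """Return the event immediately following the first interview, or None."""
    for prev, nxt in zip(events, events[1:]):
        if prev == "interview":
            return nxt
    return None


# Hypothetical file histories for a handful of applications.
histories = [
    ["office_action", "interview", "allowance"],
    ["office_action", "interview", "office_action", "abandonment"],
    ["office_action", "allowance"],  # no interview: excluded from the tally
]

outcomes = Counter(
    nxt for h in histories
    if (nxt := next_event_after_interview(h)) is not None
)
print(outcomes)  # one allowance, one follow-on office action
```

Counting the immediate next event, rather than the eventual disposal, is what removes the selection bias described above: it measures what the interview itself moved, not which cases were worth interviewing about.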

Another, even more problematic statistic is allowance rate itself. Traditionally defined as patented cases divided by patented and abandoned cases, this metric is practically useless for new examiners who haven’t had a chance to get any allowances or abandonments. It doesn’t account for the examiner’s pending docket, and it penalizes examiners for abandonments that may have simply been a business decision.
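The traditional formula is simple enough to write down, and doing so makes its failure mode obvious. A minimal sketch, with illustrative counts rather than real USPTO data:

```python
# Traditional allowance rate: patented cases divided by patented plus
# abandoned cases. Pending applications never enter the formula.


def allowance_rate(patented, abandoned):
    """Return the allowance rate, or None when there are no disposals yet.

    The None case is exactly where this metric breaks down: a new
    examiner with a full docket but no disposals produces no signal.
    """
    disposals = patented + abandoned
    if disposals == 0:
        return None
    return patented / disposals


veteran = allowance_rate(patented=180, abandoned=60)  # 0.75
rookie = allowance_rate(patented=0, abandoned=0)      # None: undefined
print(veteran, rookie)
```

Note also that the denominator treats every abandonment as the examiner's doing, even when it was purely the applicant's business decision.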

Instead of allowance rate, the patent data community is increasingly turning toward a new metric based on the ratio of office actions to allowances—across the examiner’s entire portfolio. The idea is to capture what an examiner does at work every day: how many rejections have they written, and how many allowances did they grant? This metric has a couple of names: Professor Shine Tu of West Virginia University College of Law coined it “OGR” (office action to grant ratio), and LexisNexis PatentAdvisor® sells a version of this metric called “ETA” (examiner time allocation). Whatever its name, it is far more reliable than allowance rate for getting a general idea of what type of examiner you are working with.
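In the spirit of OGR/ETA, a rough sketch of a portfolio-wide office-action-to-grant ratio follows. The exact formulas behind those products are proprietary, so the version below is only illustrative, with made-up counts:

```python
# Illustrative office-action-to-grant ratio: rejections written divided
# by allowances granted, counted across the examiner's whole portfolio,
# so still-pending applications contribute via their office actions.


def office_action_to_grant_ratio(office_actions, allowances):
    """Return office actions per allowance; infinity if nothing allowed yet."""
    if allowances == 0:
        return float("inf")  # all rejections so far, no grants
    return office_actions / allowances


# A lower ratio suggests an easier path to allowance.
print(office_action_to_grant_ratio(120, 100))  # 1.2 office actions per grant
print(office_action_to_grant_ratio(400, 40))   # 10.0: a much tougher examiner
```

Because every office action counts from day one, the metric produces a meaningful number even for examiners too new to have any disposals, which is precisely where allowance rate fails.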

Examiner metrics do reflect variability outside of what we expect.

ETA is certainly correlated with examiner experience, but even there, you will find outliers. See, for example, this examiner with significant experience and a high ETA.

And finally, there are efficient and inefficient examiners in almost every art unit at the US patent office, debunking the theory that differences only exist because of technology.

So if you care at all about patent assets pending at the US patent office, it does make sense to investigate the examiner assigned to your case. Like any statistics, examiner analytics will never tell the whole story, but they do provide insights that can save you significant time and money.
