Director Reasoning Analysis
Pattern analysis of discretionary denial factors and predicted outcomes
These cards summarize an AI-driven analysis of all Director discretionary denial decisions. The model uses natural language processing to read petitioner and patent owner briefs, extract key arguments, and predict outcomes. "Decisions Analyzed" is the total number of decided cases. "Deny %" and "Refer %" show the outcome split. "Model Accuracy" reflects how often the AI's prediction matched the actual Director decision. "Pending Predictions" counts cases with briefs filed but no decision yet.
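As a rough illustration of how the card values could be derived, here is a minimal sketch in Python. The `Case` record, its field names, and the "deny"/"refer" labels are assumptions for illustration, not the dashboard's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Case:
    proceeding: str
    outcome: Optional[str]   # "deny", "refer", or None while still pending (assumed labels)
    predicted: str           # model's predicted outcome for the case

def summary_cards(cases: list[Case]) -> dict:
    """Compute the values shown on the summary cards."""
    decided = [c for c in cases if c.outcome is not None]
    pending = [c for c in cases if c.outcome is None]
    denials = sum(1 for c in decided if c.outcome == "deny")
    correct = sum(1 for c in decided if c.predicted == c.outcome)
    n = len(decided)
    return {
        "decisions_analyzed": n,
        "deny_pct": 100 * denials / n if n else 0.0,
        "refer_pct": 100 * (n - denials) / n if n else 0.0,
        "model_accuracy": 100 * correct / n if n else 0.0,
        "pending_predictions": len(pending),
    }
```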
Argument Rankings
Using AI analysis of petitioner and patent owner briefs, this tab ranks the most common arguments by frequency. "Winning" arguments appeared in the prevailing party's brief. "Avg Weight" reflects how strongly the AI model associates the argument with the outcome. "Factor" categorizes arguments under standard discretionary denial factors (Fintiv, NHK Spring, § 325(d), etc.).
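The ranking itself is a straightforward aggregation over the arguments the model extracts. Below is a minimal sketch, assuming each extracted argument is a record carrying the party that raised it, the model's weight, the associated factor, and whether that party prevailed; all field names are assumed for illustration.

```python
from collections import defaultdict

def rank_arguments(records, party, winning=True, top_n=10):
    """Aggregate extracted arguments for one party, split into winning or losing.

    `records` is assumed to be an iterable of dicts with keys
    "argument", "party", "weight", "factor", and "party_prevailed".
    """
    buckets = defaultdict(lambda: {"count": 0, "weight_sum": 0.0, "factor": None})
    for r in records:
        if r["party"] != party or r["party_prevailed"] != winning:
            continue
        b = buckets[r["argument"]]
        b["count"] += 1
        b["weight_sum"] += r["weight"]
        b["factor"] = r["factor"]
    rows = [
        (arg, b["count"], b["weight_sum"] / b["count"], b["factor"])
        for arg, b in buckets.items()
    ]
    # Rank by frequency first, then by average weight.
    rows.sort(key=lambda r: (r[1], r[2]), reverse=True)
    return rows[:top_n]
```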
Winning Arguments
Petitioner
| # | Argument | Frequency | Avg Weight | Factor |
|---|---|---|---|---|
Patent Owner
| # | Argument | Frequency | Avg Weight | Factor |
|---|---|---|---|---|
Losing Arguments
Petitioner
| # | Argument | Frequency | Avg Weight | Factor |
|---|---|---|---|---|
Patent Owner
| # | Argument | Frequency | Avg Weight | Factor |
|---|---|---|---|---|
Factor Breakdown
Factors are the legal categories the Director considers. The bar chart shows the AI-derived win rate per factor: how often raising that factor correlated with a favorable outcome for the party that raised it, across all analyzed briefs. Cards below show detailed win/loss counts.
Win Rate by Factor
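A minimal sketch of how the per-factor win rate behind the bar chart could be computed, reusing the same assumed argument records as the ranking sketch above (field names are illustrative):

```python
def win_rate_by_factor(records):
    """Fraction of times raising a factor coincided with the raising party prevailing."""
    wins, totals = {}, {}
    for r in records:
        f = r["factor"]
        totals[f] = totals.get(f, 0) + 1
        if r["party_prevailed"]:
            wins[f] = wins.get(f, 0) + 1
    return {f: wins.get(f, 0) / totals[f] for f in totals}
```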
Pending Predictions
Proceedings where briefs have been filed but the Director hasn't decided yet. The AI model predicts the likely outcome (Deny or Refer) with a confidence score based on arguments extracted from the briefs.
Cases with briefs filed but no Director decision yet.
| Proceeding | Predicted Outcome | Confidence | Summary |
|---|---|---|---|
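A minimal sketch of how the rows in this table could be assembled, assuming each case is a plain dict and `model.predict` is a stand-in for whatever inference call the underlying model exposes (it is assumed to return a predicted label, a confidence, and a short summary):

```python
def pending_rows(cases, model):
    """Rows for the pending-predictions table.

    Each case is assumed to be a dict with "proceeding", "outcome"
    (None while pending), and "briefs"; these keys are illustrative.
    """
    rows = []
    for c in cases:
        if c["outcome"] is not None:          # skip decided proceedings
            continue
        label, confidence, summary = model.predict(c["briefs"])
        rows.append((c["proceeding"], label, confidence, summary))
    rows.sort(key=lambda r: r[2], reverse=True)   # highest confidence first
    return [(p, lbl, f"{conf:.0%}", s) for p, lbl, conf, s in rows]
```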
Model Accuracy
This tab evaluates the AI model's predictions against actual Director decisions. The table lists each decided case with actual vs. predicted outcome, confidence, and whether the prediction was correct. Incorrect predictions are highlighted in red.
| Proceeding | Actual | Predicted | Confidence | Match |
|---|---|---|---|---|
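A minimal sketch of how the accuracy table and the headline accuracy figure could be computed; the dict keys are assumed for illustration:

```python
def accuracy_rows(cases):
    """Rows for the accuracy table plus the overall accuracy figure.

    Each case is assumed to be a dict with "proceeding", "outcome",
    "predicted", and "confidence".
    """
    rows, correct = [], 0
    decided = [c for c in cases if c["outcome"] is not None]
    for c in decided:
        match = c["predicted"] == c["outcome"]
        correct += match
        rows.append((c["proceeding"], c["outcome"], c["predicted"],
                     f"{c['confidence']:.0%}", "Yes" if match else "No"))
    accuracy = correct / len(decided) if decided else 0.0
    return rows, accuracy
```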