Seven Years of the Appeals Modernization Act: What the Outcome Data Tell Us and What They Don’t

The Veterans Appeals Improvement and Modernization Act of 2017 promised veterans something the Legacy system had not delivered: choices.[1] Three lanes instead of one. Faster resolution. Less rework. Seven years of published data let us test how the architecture has performed under conditions nobody designing it could fully have anticipated.[2]
Two of those conditions matter most. The PACT Act, in August 2022, added millions of toxic exposure claims to a system that was already scaling. And the rule that has shaped veterans benefits adjudication for decades remained in place: under the Veterans Judicial Review Act, attorneys generally cannot enter the process until after the initial decision and the factual record is closed.[3] The initial claim is developed without legal strategy. The corrective work happens downstream. The supplemental claim lane absorbs it.
That structural feature shapes most of what follows. Supplemental claims have grown from 50,102 in FY 2019 to 685,669 in FY 2025, with roughly half producing at least partial relief. The Board has missed its timeliness targets across all three dockets, while closing the gap year over year.[4] The CAVC returns approximately 80% of the Board decisions it reviews, in line with longstanding patterns.[5] Higher-Level Review grant rates remain the lowest of any lane — predictably, given the closed record. The hearing docket’s apparent outcome advantage is better explained by case selection than by anything that happens at the hearing. And the variable that would tell us the most about system equity — whether representation status changes outcomes — is not yet publicly reported.
What follows is an attempt to read the data seriously: what the numbers show, what they do not, and what practitioners should do with them. None of this is a critique of the people inside VA doing the work. It is an analysis of the system within which they do it.
The views expressed are the author’s own. All public data are drawn from VA reports available at benefits.va.gov/REPORTS/ama/. Practitioner observations are informed by CCK’s institutional case data, which is not publicly available and not representative of the VA system as a whole.
I. The Correction Function
A. Supplemental Claims
The supplemental claim lane was designed to receive new and relevant evidence after an unfavorable decision.[6] The duty to assist applies. VA set a timeliness target of 125 days.
Table 1. Supplemental Claim Volume and Grant Rates, FY 2019–FY 2025
| FY | Total Claims | Total Grants | Grant Rate | 125-Day Compliance |
|---|---|---|---|---|
| 2019 | 50,102 | 17,489 | 34.9% | — |
| 2020 | 236,808 | 94,778 | 40.0% | — |
| 2021 | 236,565 | 104,493 | 44.2% | — |
| 2022 | 278,309 | 130,185 | 46.8% | — |
| 2023 | 395,649 | 212,677 | 53.8% | 67.8% |
| 2024 | 539,063 | 278,011 | 51.6% | 47.7% |
| 2025 | 685,669 | 330,209 | 48.2% | 69.7% |
Source: VA AMA Metrics Reports, benefits.va.gov/REPORTS/ama/.
Volume has grown nearly fourteenfold in six years. Most of that growth followed the PACT Act, as denied or underrated toxic exposure claims entered the pipeline.[7] The grant rate climbed from 34.9% to a peak of 53.8% in FY 2023, then moderated as volume continued to rise. Timeliness compliance dipped to 47.7% in FY 2024 — a year of historic intake — before recovering to 69.7% in FY 2025.
The grant rate is the indicator that requires interpretation. When roughly half of supplemental claims succeed, the explanation is not that initial decisions were wrong on a wholesale basis. It is that the front end of the system processes an enormous volume of claims under a rule structure that withholds attorney assistance until after the initial decision. Ridgway identified this dynamic as a predictable consequence of the VJRA.[8] Initial development lacks legal strategy. Correction happens downstream. The supplemental claim lane is doing the corrective work the system was always going to push to it.
The VA OIG identified a related measurement question.[9] When a Higher-Level Review identifies a duty-to-assist error, VA opens a new supplemental claim to correct it. Each component carries its own 125-day clock. The veteran experiences a single wait. VA’s public metrics report each component separately rather than the combined duration. The OIG’s recommendation that combined wait times be reported reflects healthy internal oversight. Implementing it would close a real measurement gap.
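The OIG's point is simple arithmetic, and a small sketch makes it concrete. The durations below are invented for illustration (they are not VA data): each component can satisfy its own 125-day clock while the combined wait nearly doubles the target.

```python
# Hypothetical durations (illustration only, not VA data): a claim that
# goes through HLR, takes a duty-to-assist return, and finishes as a
# supplemental claim. Each component carries its own 125-day clock.
TARGET_DAYS = 125

components = {
    "Higher-Level Review": 110,
    "Supplemental claim (DTA correction)": 120,
}

# Per-component compliance: what VA's public metrics currently report.
per_component_ok = all(days <= TARGET_DAYS for days in components.values())

# Combined wait: what the veteran actually experiences.
combined_wait = sum(components.values())

print(f"Each component within {TARGET_DAYS} days? {per_component_ok}")  # True
print(f"Combined wait: {combined_wait} days")                           # 230
print(f"Combined wait within target? {combined_wait <= TARGET_DAYS}")   # False
```

Under these assumed numbers, both components would count as timely in the published metrics even though the veteran waited 230 days for a resolution.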
B. Higher-Level Review
Higher-Level Review is a closed-record, de novo review by a more senior adjudicator. No new evidence. No duty to assist. An optional informal conference lets the claimant point the reviewer to a specific error. VA’s target: 125 days.
Table 2. Higher-Level Review Volume and Grant Rates, FY 2019–FY 2025
| FY | Total Claims | Total Grants | Grant Rate | 125-Day Compliance |
|---|---|---|---|---|
| 2019 | 16,512 | 2,388 | 14.5% | — |
| 2020 | 60,005 | 9,175 | 15.3% | — |
| 2021 | 115,835 | 20,722 | 17.9% | — |
| 2022 | 117,740 | 21,964 | 18.7% | — |
| 2023 | 141,501 | 28,404 | 20.1% | 98.5% |
| 2024 | 217,781 | 48,896 | 22.5% | 72.3% |
| 2025 | 283,975 | 68,853 | 24.2% | 97.8% |
Source: VA AMA Metrics Reports, benefits.va.gov/REPORTS/ama/.
The grant rate has climbed steadily from 14.5% to 24.2%. The trajectory is consistent with reviewers becoming more willing over time to exercise difference-of-opinion authority — the maturation the lane was designed to produce. The rate sits at roughly half the supplemental claim lane’s, which is structurally predictable: a closed record cannot fix evidentiary gaps.
Two features the aggregate obscures matter for practitioners. First, when an HLR reviewer identifies a duty-to-assist failure, the claim returns to the supplemental claim lane for correction rather than being granted outright. The published grant rate does not distinguish DTA returns from difference-of-opinion grants. A DTA return is not a denial — it preserves the original effective date and triggers further development — but it is also not the merits decision the veteran sought. Combined with the OIG’s measurement point above, it is a feature worth tracking.
Second, timeliness. FY 2025 compliance was 97.8%. That is the AMA’s clearest performance success. When the existing evidence supports the claim and the error was analytical, HLR delivers the answer faster than any other pathway in the system.
The practitioner rule of thumb: HLR when the evidence is already there, and the error was in the analysis. Supplemental claim when new evidence is needed. When in doubt, supplemental — the open record accommodates both problems.
II. The Hearing Docket Selection Effect
The Board offers three dockets under the AMA: direct review, evidence submission, and hearing. The Board issued 71,262 AMA decisions in FY 2024, more than doubling FY 2023 output.[10] AMA decisions now account for the majority of the Board’s workload. The docket-level differences in the data are where careful reading matters most.
A note on the “Other” column. It captures withdrawals, dismissals, and administrative closures. On the hearing docket, “Other” runs at 21% — substantially above the 11% rate on direct review. Many are hearing cancellations or rescheduling-driven dismissals. Hearing-docket outcome percentages are calculated against a denominator that excludes a large share of cases that never reached a merits decision.
Table 3. Board Appeal Outcomes by Docket, FY 2025
| Docket | Allowed | Allowed w/ Remand | Remand | Denied | Other |
|---|---|---|---|---|---|
| Direct Review | 26% | 8% | 31% | 24% | 11% |
| Evidence Sub. | 29% | 12% | 29% | 17% | 14% |
| Hearing | 28% | 12% | 25% | 13% | 21% |
| Total | 27% | 10% | 29% | 20% | 14% |
Source: VA AMA Metrics Reports, FY 2025 data.
Read at face value, the hearing docket numbers suggest hearings produce better outcomes. Combined allowance of about 40%. Denial rate of 13%, nearly half the direct-review rate. The lowest remand rate among the three dockets.
That reading does not survive examination of who chooses the hearing docket. Veterans and representatives who select hearings have, on average, stronger cases, more developed evidence, and a higher likelihood of being represented by counsel. The favorable outcome profile reflects that selection at least as much as anything that happens at the hearing itself.
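The denominator effect can be quantified directly from the FY 2025 figures in Table 3. The sketch below renormalizes the direct review and hearing docket rates over merits outcomes only (everything except "Other"). It is simple arithmetic on the published percentages and makes no adjustment for case selection, so whatever gap remains after renormalization is where the selection explanation does its work.

```python
# FY 2025 Table 3 rates (percent): allowed, allowed w/ remand, remand,
# denied, other. Renormalizing over merits outcomes (excluding "Other")
# shows how much of the hearing docket's low denial rate is just the
# denominator. Simple arithmetic; no adjustment for case selection.
dockets = {
    "Direct Review": (26, 8, 31, 24, 11),
    "Hearing":       (28, 12, 25, 13, 21),
}

for name, (allowed, allowed_rem, remand, denied, other) in dockets.items():
    merits = allowed + allowed_rem + remand + denied  # excludes "Other"
    merits_denial = 100 * denied / merits
    print(f"{name}: raw denial {denied}%, merits-only denial {merits_denial:.1f}%")
```

On these numbers the hearing docket's denial advantage narrows once non-merits dispositions are excluded (16.7% versus 27.0%, rather than 13% versus 24%) but does not disappear, which is consistent with attributing the remainder to selection.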
Hearings change adjudicator assessments in only a minority of cases. Most often, the testimony confirms what the written record already shows. The veteran’s account tracks the medical evidence. The representative’s argument tracks the brief. When a hearing does move a decision, it is usually because the veteran has described functional impact that the records did not capture — how a back condition affects dressing, driving, or sitting through a workday. Those details matter. A targeted lay statement or a medical opinion addressing functional impact can usually surface the same details, faster.
Speed is the other half of the calculation. The Board established timeliness targets of 365 days for direct review, 550 days for evidence submission, and 730 days for hearing.[11] Through Q3 FY 2024, average days to complete a hearing-docket decision was approximately 1,089; direct review averaged 866.[12] More recent data show meaningful improvement on the direct review docket, with the Board reporting performance closer to its targets. The hearing docket remains the slowest pathway. For most claims, it adds years to an outcome that well-prepared written advocacy can achieve sooner.
The regulation itself signals the same point. Under 38 C.F.R. § 20.700(b), a hearing “will not normally be scheduled solely for the purpose of receiving argument by a representative. Such argument may be submitted in the form of a written brief.” The Board’s own rule distinguishes testimony (which requires a hearing) from argument (which does not). Most of what representatives do at Board hearings is argument.
The AMA adds a further consideration. The VLJ who conducts the hearing is not necessarily the VLJ who decides the case. The deciding judge reads the transcript. A transcript is, in effect, a brief the practitioner did not write. Pauses, redirects, the moments where the veteran trails off or contradicts an earlier statement — all preserved. A brief controls the narrative. A hearing surrenders that control. VLJs are scheduled for six to twelve hearings per day. By afternoon, fatigue is real, and the deciding VLJ is unlikely to remember the live event when the case file arrives weeks later. The transcript will be read the way a brief is read. Better to write the brief.
Bosely and I addressed the VLJ’s hearing duties in a Veterans Law Review article analyzing the Board’s obligations under Bryant v. Shinseki to explain issues and suggest evidence.[13] Those duties are real, and VLJs take them seriously. They do not, on their own, justify a 500-day wait premium for the average claim.
The practitioner default should be evidence submission or direct review. Reserve hearings for the minority of cases that genuinely require testimony — credibility issues, functional limitations that resist written description, accounts whose persuasive force depends on the veteran being heard. Those cases exist. They are a smaller fraction of the docket than current usage reflects. CCK’s institutional practice is to request hearings sparingly. The outcomes have not suffered.
III. The Remand Cycle
Across all three Board dockets, the combined remand rate (remand plus allowed-with-remand) runs between 37% and 41%. The hearing docket performs best on this measure, sending 37% of its cases back for additional development; direct review sends 39%, and evidence submission 41%.
At the CAVC, the picture is more pronounced. GAO reported in November 2023 that the Court returned approximately 80% of appealed Board decisions between FY 2020 and FY 2022 because it found the Board’s reasoning inadequate.[14] Most returns took the form of Joint Motions for Remand, negotiated between the parties and reviewed by the Clerk. The Board’s FY 2024 annual report indicates that 8–9% of Board decisions are appealed to the CAVC annually.[15] The Board’s internal quality review examined 3,598 decisions in FY 2024 and identified 159 errors (69 Legacy, 90 AMA); approximately 14.6% involved failure to fully address all raised contentions and theories of entitlement.[16] A review of AMA remands in FY 2023 found that the majority related to examination and medical opinion requests, prompting the Board to form a Tiger Team to identify why specific exam-related issues were remanded at higher rates.[17] That kind of internal diagnostic work is the right response. Ridgway documented the CAVC’s remand-heavy disposition pattern in the first volume of the Veterans Law Review nearly two decades ago and asked why the rate was so high.[18] The rate has not changed materially since.
Each remand is a carrying cost. Months or years added to a case that was supposed to be resolved. The veteran does not experience a remand as a procedural formality. It is waiting for backpay and monthly compensation while the claim cycles through again. A system built on rework — across multiple lanes, with multiple clocks — cannot move faster simply by working harder.
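One way to see the carrying cost is a stylized back-of-the-envelope model (my construction, not a VA metric): if a fixed fraction p of Board decisions are remanded and remanded cases eventually return for another decision, the expected number of Board passes per claim is 1/(1 − p), by the usual geometric series.

```python
# Stylized rework model (illustrative only): assumes a constant remand
# probability on every Board pass, which real dockets do not have.
# With probability p of remand per pass, expected passes = 1 / (1 - p).
def expected_passes(remand_rate: float) -> float:
    """Expected Board decisions per claim under a constant remand rate."""
    return 1.0 / (1.0 - remand_rate)

# The combined remand rates the dockets actually report fall in this range.
for p in (0.37, 0.39):
    print(f"remand rate {p:.0%}: ~{expected_passes(p):.2f} passes per claim")
```

At the reported combined remand rates, this crude model implies roughly 1.6 Board decisions per claim — on the order of 60% extra adjudication capacity consumed by rework, before counting the development time each remand adds at the agency level.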
The AMA introduced a structural wrinkle worth understanding. Under the Legacy system, the Board retained jurisdiction after remand; the case returned to the same panel. Under the AMA, a Board remand routes the claim to the supplemental claim lane, and the Board’s jurisdiction ends. If the veteran disagrees with the post-remand decision, a new Notice of Disagreement is required. The CAVC confirmed this design’s jurisdictional consequences in Cooper v. McDonough, holding that Board remands under the AMA remain non-final, non-appealable orders.[19] The practical result: a Board remand is no longer a waypoint within a single appeal. It is the starting point of a new one.
That design has a downstream measurement implication. Supplemental claim volume figures include an unknown number of cases that already went through the Board and were remanded. The system counts them as new supplemental claims. They are functionally re-entries. VA does not currently distinguish post-Board remand supplementals from initial filings in its public reporting. The distinction matters for understanding whether the lane’s growth reflects genuinely new evidence development, cyclical rework, or both. Adding it to the published metrics would clarify the picture.
Ridgway and Ames’s empirical work on reasons-or-bases remands adds a useful caution.[20] Their analysis showed that remands for inadequate Board reasoning often do not change outcomes — the Board reaches the same result with better-written reasons. A CAVC remand preserves the opportunity. It does not create the result. Practitioners should preserve every potential argument for CAVC review while counseling clients that a remand is a procedural reset, not a victory.
IV. What the Public Data Do Not Yet Show
A. Representation
Whether representation status changes outcomes — and whether attorney representation, VSO representation, and unrepresented filings produce materially different results — is the variable that would tell us the most about system equity. The data sit within VA’s systems, and answering the question well is not VA’s task alone. It is a research question with several natural collaborators: VA, NOVA, the major VSOs, the CAVC Bar Association, and academic institutions positioned to design the methodology. A jointly designed inquiry — VA producing the data, the practitioner and VSO communities helping to scope the questions, academic researchers handling the analysis — would produce information the current accreditation debate badly needs. That information would matter equally to VSO leadership, to private practitioners, and to VA itself; all three have an institutional interest in understanding what representation contributes. The question is collaborative, not adversarial.
B. Geography and Adjudicator Consistency
The AMA metrics are national aggregates. They are not currently disaggregated by Regional Office. GAO has flagged geographic inconsistency in VA disability decisions repeatedly: in 2002, twice in 2005, and again in 2010.[21] The pattern has been consistent. VA’s 57 Regional Offices have produced meaningfully different outcomes for similarly situated veterans. GAO reported in 2005 that average disability compensation per veteran ranged from $6,710 in Ohio to $10,851 in New Mexico.[22] More recent RAND work documented disparities in grant rates that varied by Regional Office.[23] Some variation reflects differences in veteran populations and claim mix. The magnitude has long exceeded what those factors alone explain.
The AMA gives the question new weight. If supplemental claim grant rates vary by RO, that variation tells VA which offices are producing initial decisions that need correction at higher rates. If HLR grant rates vary, that tells VA which offices have reviewers more willing to exercise difference-of-opinion authority. The data exist. Publishing them would serve both internal quality control and public confidence.
The same gap exists at the Board. GAO’s 2023 report found that the Board does not currently assess decision-making consistency across VLJs systematically.[24] Practitioners and subject-matter experts told GAO that VLJ inconsistency is a challenge. The Board responded that variation is inherent in individualized adjudication, and there is something to that. Adjudicatory independence means different decision-makers weigh evidence differently. The question is whether the variation falls within a range that reflects legitimate judgment or whether some portion reflects inconsistency in how legal standards are applied. The answer requires study. Two decades after GAO first raised the question, it remains open.[25]
C. The Rework Loop
As discussed in Section III, the AMA’s post-remand jurisdictional design means supplemental claim volume figures are mixed with an unknown share of recycled cases, and VA does not distinguish post-Board remands from initial supplementals in its public reporting. Until it does, the supplemental claim lane’s growth trajectory cannot be cleanly interpreted. Disaggregation would resolve the ambiguity in either direction.
V. What to Do With the Numbers
A disclosure. The recommendations below are informed by my experience adjudicating appeals as a VLJ and by CCK’s institutional data across thousands of Board appeals. CCK is the largest veterans disability law firm in the United States. Its case data are not representative of the system as a whole. Its clients are represented; its cases are screened; its evidence development is more intensive than what most veterans experience. The patterns are nonetheless consistent with the public data. With that caveat:
If the error is evidentiary, use the supplemental claim lane. Match the new evidence to the denial reason. If the issue was nexus, get a medical opinion. If the issue was no current diagnosis, get a diagnosis. The 48% grant rate reflects a lane that works when the evidence is targeted.
If the error is analytical and the record is complete, consider HLR. The 24% grant rate is lower, but turnaround is dramatically faster (97.8% within 125 days in FY 2025). If HLR is unsuccessful, file a supplemental claim.
At the Board, default to evidence submission or direct review. The hearing docket’s lower denial rate is a selection effect, not a hearing effect. The evidence submission docket produces comparable allowance rates without the wait premium. Write the brief.
Use direct review for narrow legal questions on a complete record. Accept the higher denial rate (24%) only when the legal error is clear and well supported by existing case law.
Preserve arguments for CAVC review, but calibrate expectations. The 80% return rate means a Board decision is rarely the end of the process. Raise every potential error at the Board stage. Counsel clients that a remand is a reset, not a result.
Conclusion
The AMA was a genuine and ambitious reform. Reducing rework and accelerating resolution were the right goals. Seven years of data show the system has achieved both in places — HLR timeliness in particular, and meaningful grant rates across multiple lanes — while continuing to absorb the consequences of a structural feature it did not create. The VJRA’s rule that attorneys generally cannot enter the process until after the first decision pushes corrective work into the supplemental claim lane and, on appeal, into HLR and the Board. No statutory reform downstream of the initial decision can fully address an evidentiary record that legal counsel was not permitted to develop on the front end.
The next phase of reform should focus on three things. First, completing the data picture: publishing outcomes by Regional Office, by representation status, and by VLJ; distinguishing initial supplementals from post-Board remand supplementals; and reporting combined wait times when claims cycle through multiple components. Second, addressing the upstream accuracy question rather than continuing to add lanes. Third, recognizing that the CAVC will increasingly review AMA output at scale, with novel procedural questions — lane selection consequences, evidentiary window ambiguities, the deciding-VLJ assignment problem — that benefit from system-level understanding on the reviewing bench.[26]
The numbers are public. The analysis should be too.
Endnotes
[1] Veterans Appeals Improvement and Modernization Act of 2017, Pub. L. No. 115-55, 131 Stat. 1105 (codified as amended at 38 U.S.C. §§ 5104A–B, 7105).
[2] VA AMA Metrics Reports are published monthly and available at benefits.va.gov/REPORTS/ama/. Unless otherwise noted, all volume and outcome data in this article are drawn from these reports.
[3] See James D. Ridgway, The Veterans’ Judicial Review Act Twenty Years Later: Confronting the New Complexities of the Veterans Benefits System, 66 N.Y.U. Ann. Surv. Am. L. 251, 271 (2010) (discussing the constraint on attorney involvement until after agency proceedings conclude).
[4] Board of Veterans’ Appeals, Annual Report for Fiscal Year 2024, available at department.va.gov/board-of-veterans-appeals/annual-reports-to-congress/. The Board established timeliness targets of 365 days (direct review), 550 days (evidence submission), and 730 days (hearing). See Board of Veterans’ Appeals, Annual Report for Fiscal Year 2021.
[5] GAO, VA Disability Benefits: Board of Veterans’ Appeals Should Address Gaps in Its Quality Assurance Process, GAO-24-106156, at 17–19 (Nov. 2023).
[6] 38 C.F.R. § 3.2501(a)(1) (2019).
[7] Sergeant First Class Heath Robinson Honoring Our Promise to Address Comprehensive Toxics Act of 2022 (PACT Act), Pub. L. No. 117-168, 136 Stat. 1759.
[8] See supra note 3.
[9] VA OIG, VA Developed Reporting Metrics for Appeals Modernization Act Decision Reviews but Could Be Clearer on Some Veterans’ Wait Times, Report No. 21-00993-225 (2022).
[10] See supra note 4. AMA decision output more than doubled from 32,661 in FY 2023 to 71,262 in FY 2024.
[11] See supra note 4 (citing FY 2021 Annual Report timeliness targets).
[12] See supra note 4 (reporting ADP and ADC trends by docket through Q3 FY 2024).
[13] Corey L. Bosely & Bradley W. Hennings, A Proposed Approach to the BVA’s Clarified Hearing Duties to Explain and Suggest Pursuant to Bryant v. Shinseki, 5 Vet. L. Rev. 164 (2013); see also Bryant v. Shinseki, 23 Vet. App. 488 (2010).
[14] See supra note 5, at 17–19.
[15] See supra note 4.
[16] See supra note 4. The Board’s internal quality review examined 3,598 decisions in FY 2024 and identified 159 errors (69 Legacy, 90 AMA). Approximately 14.6% of identified errors involved failure to fully address all raised contentions and theories of entitlement.
[17] See VA Congressionally Mandated Report, Appeals Modernization, Feb. 2024, at 12 (discussing the Tiger Team’s review of AMA remands in FY 2023, finding the majority related to examination and medical opinion requests).
[18] James D. Ridgway, Why So Many Remands?: A Comparative Analysis of Appellate Review by the United States Court of Appeals for Veterans Claims, 1 Vet. L. Rev. 99 (2009).
[19] Cooper v. McDonough, No. 23-5963 (Vet. App. Sept. 18, 2024) (dismissing appeal for lack of jurisdiction; holding Board remands under the AMA remain non-final, non-appealable orders consistent with pre-AMA precedent notwithstanding the Board’s loss of post-remand jurisdiction).
[20] James D. Ridgway & David S. Ames, Misunderstanding Chenery and the Problem of Reasons-or-Bases Review, 68 Syracuse L. Rev. 303 (2018).
[21] See GAO, Veterans’ Disability Benefits: Claims Processing Problems Persist and Are Growing, GAO-02-806 (July 2002); GAO, VA Disability Benefits: Board of Veterans’ Appeals Has Made Improvements in Quality Assurance, but Challenges Remain for VA in Assuring Consistency, GAO-05-655T (May 5, 2005); GAO, VA Disability Benefits: Improvements Could Further Enhance Quality Assurance and Consistency in Claims Processing, GAO-05-99 (Jan. 2005); GAO, VA Disability Benefits: Closer Oversight and Better Data Could Help Improve the Accuracy and Consistency of Decisions, GAO-10-213 (Feb. 2010).
[22] See GAO-05-655T, supra note 21, at 11 (reporting state-level variation in average disability compensation per veteran).
[23] See RAND Corp., Reducing Racial Disparities in VA Disability Compensation Decisions (2023) (documenting disparities in grant rates that varied by Regional Office).
[24] See supra note 5, at 22–24.
[25] See supra note 21 (GAO-05-655T) (noting that VA lacked systematic methods for ensuring consistency as early as 2005).
[26] See Bradley W. Hennings, Now More Than Ever: The Case for Specialized Nominees to the U.S. Court of Appeals for Veterans Claims, Ten Years On, ROA Law Rev. No. 26013 (Apr. 2026), available at roa.org; see also David J. Boelzner, Jennifer Rickman White & Bradley W. Hennings, Now is the Time: Experts vs. the Uninitiated as Future Nominees to the U.S. Court of Appeals for Veterans Claims, 25 Fed. Cir. B.J. 367 (2016).
