
MWC 2026: Agentic AI Conquers the RAN, But the Field Still Has the Last Word

Analysis of AI-RAN announcements at MWC 2026: Ericsson, Nokia, NVIDIA demos, AI-RAN Alliance progress, and why field validation remains essential when AI optimizes the network.

Takwa Sebai
Founder & CEO, HiCellTek
March 17, 2026 · 9 min read

Mobile World Congress 2026 in Barcelona marked the year that agentic AI in RAN operations moved from concept to deployment reality. Ericsson demonstrated autonomous RAN tuning across live 5G cells. Nokia showcased its MantaRay SON platform with real-time reinforcement learning. NVIDIA announced a $1 billion investment in Nokia's AI-RAN infrastructure. The AI-RAN Alliance, now counting 80+ members, published its first interoperability framework.

The message from the exhibition floor was unambiguous: the Network Operations Center is becoming an AI-first environment. But beneath the keynotes lies a structural question that no vendor addressed directly: when AI optimizes the network from KPIs, who validates that the optimization actually improves the experience on the ground?


The AI-RAN landscape after MWC 2026

Key announcements and their significance

| Vendor/Entity | Announcement | Significance |
|---|---|---|
| Ericsson | Ericsson Intelligent Automation Platform (EIAP) live demo: autonomous CIO (Cell Individual Offset) tuning across 200 cells | First public demo of closed-loop RAN parameter optimization without human approval |
| Nokia | MantaRay SON with reinforcement learning; AI-driven energy savings mode | AI agent adjusts capacity/coverage tradeoff in real time based on traffic prediction |
| NVIDIA | $1B investment in Nokia AI-RAN; Aerial platform for GPU-accelerated RAN | Positions GPU infrastructure as essential RAN component, not just data center |
| AI-RAN Alliance | Interoperability framework v1.0; 80+ members | First standardized approach to multi-vendor AI-RAN integration |
| Telstra | Reported AU$122M annual savings from AI-driven network operations | First major operator to publicly quantify AI-RAN ROI |
| Samsung | AI-based beam management for 5G mmWave | Addresses the most challenging RF optimization problem in 5G |
| Rakuten Symphony | Symworld AI orchestrator for Open RAN | AI orchestration across disaggregated RAN components |

The numbers behind the hype

The data supporting AI-RAN adoption is now substantial enough to move beyond vendor marketing:

| Metric | Value | Source |
|---|---|---|
| Telcos increasing AI budget | 89% | TM Forum / Analysys Mason 2025 |
| NOC operations AI-assisted (projected 2027) | 60% | Ericsson Mobility Report |
| OPEX reduction from AI-RAN (observed range) | 15-30% | Multiple operator reports |
| Telstra AI savings (annual) | AU$122M ($79M USD) | Telstra FY2025 results |
| Energy savings from AI sleep modes | 10-20% | Nokia, Ericsson field data |
| Mean time to resolve (MTTR) reduction | 40-60% | Vodafone, Deutsche Telekom pilots |
| AI-RAN Alliance members | 80+ | AI-RAN Alliance |

The 89% figure is particularly telling. When nine out of ten operators are increasing their AI/automation budget, the technology has crossed from experimentation to strategic priority. The Telstra AU$122M savings provide the financial proof point that CFOs require.


How AI-RAN actually works: the optimization loop

To understand why field validation remains essential, we need to examine exactly what AI-RAN optimizes and how.

The AI-RAN data flow

Network KPIs (OSS/BSS) → AI Model → Parameter Change → Network Response → KPI Update
     ↑                                                                       |
     └───────────────────────────────────────────────────────────────────────┘

The AI agent operates on network-side KPIs: handover success rate, RRC connection establishment rate, PDSCH throughput, CQI distribution, PRB utilization, RACH attempts, paging success rate. These are aggregate metrics collected from the eNodeB/gNodeB and reported to the OSS.
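The loop above can be sketched in a few lines. This is a deliberately naive, rule-based stand-in for the learned policy a real AI-RAN agent would use; the function and field names (`CellKpis`, `propose_cio_delta`) and the thresholds are illustrative, not taken from any vendor platform:

```python
from dataclasses import dataclass

@dataclass
class CellKpis:
    """Aggregate network-side KPIs as reported via the OSS."""
    ho_success_rate: float   # handover success rate, 0..1
    prb_utilization: float   # physical resource block usage, 0..1

def propose_cio_delta(kpis: CellKpis, step_db: float = 1.0) -> float:
    """Toy policy: shrink the cell (negative CIO delta) when congested,
    grow it when handovers fail at the edge. A real AI-RAN agent would
    learn this mapping, e.g. via reinforcement learning."""
    if kpis.prb_utilization > 0.85:
        return -step_db           # offload traffic to neighbours
    if kpis.ho_success_rate < 0.95:
        return +step_db           # keep UEs on this cell longer
    return 0.0                    # KPIs nominal, leave CIO alone

# One iteration of the loop: aggregate KPIs in, parameter change out.
congested = CellKpis(ho_success_rate=0.97, prb_utilization=0.92)
print(propose_cio_delta(congested))  # -1.0
```

Note what is absent from `CellKpis`: nothing in the input distinguishes one UE from another, which is exactly the blind spot discussed next.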

What AI-RAN optimizes

| Parameter Category | Examples | AI Approach |
|---|---|---|
| Mobility | CIO, A3 hysteresis, TTT (Time-to-Trigger) | Reinforcement learning on handover KPIs |
| Capacity | Scheduler weights, MCS thresholds, CA activation | Predictive load balancing |
| Coverage | Antenna tilt (electrical), power allocation | Coverage-capacity tradeoff optimization |
| Energy | Cell sleep/wake cycles, MIMO layer reduction | Traffic prediction + progressive shutdown |
| Interference | ICIC coordination, frequency refarming | Graph neural networks on interference maps |

What AI-RAN does not see

Here is the critical gap. AI-RAN operates on aggregate network-side metrics. It does not have visibility into:

  • Individual UE experience. A cell may report 95% handover success rate while a specific device model fails handovers due to UE capability mismatch. The 5% failure is invisible in the aggregate.
  • Indoor propagation. Network KPIs are dominated by outdoor macro coverage. Indoor dead zones, DAS underperformance, and small cell integration issues are statistically diluted.
  • Protocol-level anomalies. An RRC reconfiguration that triggers an unnecessary measurement gap, a NAS rejection due to PLMN selection error, a VoLTE call setup that completes but with degraded codec negotiation. These are visible only at Layer 3 on the device side.
  • Real user QoE. MOS scores, video stall ratios, app launch times, and DNS resolution delays are experienced by the user, not the network.
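The first bullet, aggregate masking, is easy to demonstrate numerically. With hypothetical per-device-model handover counts, the fleet-wide KPI looks healthy while one model fails more than half its handovers:

```python
# Per-device-model handover counts (hypothetical figures): the aggregate
# KPI the AI sees vs. the per-model breakdown it never sees.
ho_stats = {
    "model_a": {"attempts": 9000, "failures": 180},   # 2% failure
    "model_b": {"attempts": 800,  "failures": 16},    # 2% failure
    "model_c": {"attempts": 200,  "failures": 120},   # 60% failure!
}

attempts = sum(s["attempts"] for s in ho_stats.values())
failures = sum(s["failures"] for s in ho_stats.values())
aggregate_success = 1 - failures / attempts
print(f"aggregate HO success: {aggregate_success:.1%}")   # 96.8%

worst = max(ho_stats,
            key=lambda m: ho_stats[m]["failures"] / ho_stats[m]["attempts"])
print(f"worst model: {worst}")  # model_c, invisible in the aggregate
```

A 96.8% aggregate clears most KPI thresholds; only device-side logging that records the UE model alongside each failure exposes the outlier.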

Nokia + Infovista AFPV: the vendor admission that field matters

Perhaps the most revealing announcement at MWC 2026 was not about AI at all. Nokia and Infovista jointly presented their Automated Field Performance Validation (AFPV) solution, combining Nokia's SON optimization with Infovista's drive test automation.

The significance is in the subtext: Nokia, one of the two dominant AI-RAN vendors, publicly acknowledged that AI optimization requires field validation. If the network-side AI were sufficient, there would be no need for a field validation product.

The AFPV workflow confirms the pattern:

  1. AI-RAN optimizes network parameters
  2. AFPV dispatches automated measurement (drive test vehicle or fixed probe)
  3. Field measurements are compared against expected improvement
  4. Discrepancies trigger AI model retraining or manual investigation

Step 3 is where the architecture reveals its dependency on field data. Without ground-truth measurements from the user's perspective, the AI operates in a feedback loop that can converge on locally optimal but experientially suboptimal configurations.
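Step 3 amounts to a comparison with a decision rule. A minimal sketch of that check, assuming a simple dB-gain comparison (the function name, return strings, and the 1 dB tolerance are illustrative, not from the AFPV product):

```python
def validate_optimization(predicted_gain_db: float,
                          measured_gain_db: float,
                          tolerance_db: float = 1.0) -> str:
    """Compare the AI's expected improvement against ground-truth
    field measurement and decide what happens next."""
    gap = predicted_gain_db - measured_gain_db
    if abs(gap) <= tolerance_db:
        return "confirmed"                 # step 3 passes, loop is closed
    if measured_gain_db < 0:
        return "regression"                # roll back, investigate manually
    return "retrain"                       # model over-estimated the gain

print(validate_optimization(3.0, 2.6))   # confirmed
print(validate_optimization(3.0, -1.2))  # regression
```

The point is structural: without `measured_gain_db`, which only a field measurement can supply, the function cannot be evaluated at all.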


The new role of field engineering: validating what AI optimizes

MWC 2026 did not eliminate field testing. It redefined its function. The field engineer's role is evolving from routine measurement collection to intelligent validation and anomaly investigation.

Before AI-RAN (traditional workflow)

Schedule drive test → Collect data → Analyze in office → Recommend changes → Implement → Re-test

Average cycle time: 2-4 weeks. Reactive. Campaign-based.

After AI-RAN (emerging workflow)

AI optimizes continuously → Field validates on exception → Engineer investigates anomalies → Feeds corrections back to AI

Average cycle time for AI optimization: minutes to hours. Field validation: targeted, on-demand.

What field validation looks like in an AI-RAN world

| Validation Type | Trigger | Required Capability |
|---|---|---|
| Post-optimization spot check | AI changes CIO/tilt in a cluster | RSRP/RSRQ/SINR measurement at cell edge |
| Handover failure investigation | AI detects persistent HO failures it cannot resolve | Layer 3 RRC decode showing measurement reports and reconfiguration |
| VoLTE quality complaint | AI shows normal KPIs but users report poor voice | VoLTE MOS measurement with codec and jitter analysis |
| Indoor coverage gap | Enterprise client reports dead zones | Walk test with sub-room granularity and serving cell identification |
| New site integration | AI flags anomalous KPI behavior after new site activation | Multi-layer measurement: RF + signaling + throughput + QoE |
| CA/MIMO validation | AI activates new CA combo or MIMO layer | UE capability verification + actual CA activation monitoring |

In every case, the field tool must provide device-side, protocol-aware, GPS-correlated measurements that the network-side AI cannot generate on its own.


The 89% paradox: everyone invests in AI, few invest in validation

The TM Forum / Analysys Mason survey showing 89% of telcos increasing AI budget reveals a spending asymmetry. Operators are investing heavily in the optimization engine while underinvesting in the validation mechanism.

Consider the budget allocation pattern:

| Investment Area | Typical Budget Share | Trend |
|---|---|---|
| AI/ML platforms (OSS/BSS) | 25-35% of optimization budget | Increasing rapidly |
| Network-side analytics | 20-30% | Stable to increasing |
| Drive test (traditional) | 15-20% | Declining |
| Smartphone-based field tools | 2-5% | Emerging |
| Field validation automation | 5-10% | Increasing |

The structural problem: a 15-30% OPEX reduction from AI-RAN is meaningful only if the optimization actually delivers the intended user experience improvement. Without field validation, the savings are theoretical. An AI that reduces handover failures by 20% in KPIs but introduces a systematic 3 dB SINR degradation at sector boundaries has optimized the metric while degrading the experience.
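How bad is a 3 dB SINR degradation at a sector boundary? The Shannon bound gives a back-of-the-envelope answer. The scenario below (an edge UE falling from 3 dB to 0 dB SINR) is an illustrative assumption, not field data:

```python
import math

def shannon_bps_per_hz(sinr_db: float) -> float:
    """Shannon spectral efficiency (bit/s/Hz) for a given SINR in dB."""
    return math.log2(1 + 10 ** (sinr_db / 10))

before = shannon_bps_per_hz(3.0)   # edge UE before the AI change
after = shannon_bps_per_hz(0.0)    # same UE after a 3 dB degradation
loss = 1 - after / before
print(f"{loss:.0%} spectral-efficiency loss at the sector boundary")  # 37%
```

A ~37% throughput ceiling reduction for edge users, while the handover KPI the AI is rewarded on improves: that is the asymmetry the budget table above leaves unfunded.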


Case studies: where AI-RAN needed field correction

Case 1: CIO optimization creates ping-pong zone

An operator in Western Europe deployed AI-driven CIO optimization across a 500-cell cluster. Network KPIs showed a 12% improvement in handover success rate. Field measurement with Layer 3 decoding revealed that the AI had created a ping-pong zone between three cells at a highway interchange, where UEs were executing 8-12 handovers per minute. Each handover "succeeded" (incrementing the success KPI), but user throughput dropped 60% due to continuous measurement gaps.

The field tool detected the pathology. The network KPI masked it.
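Detecting this pathology from a device-side log is straightforward once you have the per-event handover records that aggregate KPIs discard. A sketch, with hypothetical event data and an illustrative 4-handovers-per-minute threshold:

```python
from collections import Counter

# Hypothetical Layer 3 handover log: (timestamp_s, source_pci, target_pci).
ho_events = [
    (0.0, 101, 202), (4.1, 202, 101), (8.7, 101, 202), (12.9, 202, 101),
    (17.3, 101, 303), (22.0, 303, 101), (26.8, 101, 303), (55.0, 303, 404),
]

def ping_pong_pairs(events, window_s=60.0, threshold=4):
    """Flag cell pairs with >= threshold handovers inside the window.
    Every event here 'succeeds' in the network KPI; only the rate
    between the same pair of cells reveals the ping-pong."""
    pairs = Counter(frozenset((src, tgt)) for ts, src, tgt in events
                    if ts <= window_s)
    return {tuple(sorted(p)) for p, n in pairs.items() if n >= threshold}

print(ping_pong_pairs(ho_events))  # {(101, 202)}
```

Using an unordered pair (`frozenset`) is the key detail: a ping-pong is symmetric, so A→B and B→A must count toward the same bucket.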

Case 2: Energy savings mode drops VoLTE MOS

An Asian operator activated AI-driven cell sleep mode during low-traffic hours (2:00-6:00 AM). Energy savings were measured at 18%. Field validation at 3:00 AM showed that VoLTE calls in the sleep zone were being served by a distant macro cell with MOS scores of 2.1 (below acceptable threshold of 3.0). The AI had no VoLTE quality input in its optimization function and correctly optimized for energy while inadvertently degrading voice service.

The field MOS measurement revealed the tradeoff. The AI's energy KPI showed only success.
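Where does a MOS of 2.1 come from? The ITU-T G.107 E-model maps an R-factor to MOS; the R-to-MOS formula below is the standard one, but the impairment penalties (delay to the distant macro, degraded codec) are illustrative numbers chosen to reproduce the case, not G.107 defaults:

```python
def mos_from_r(r: float) -> float:
    """ITU-T G.107 E-model mapping from R-factor to narrowband MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# Crude degradation budget: start from R0 = 93.2 and subtract penalties.
r_healthy = 93.2 - 5            # served locally: low delay, good codec
r_sleep_zone = 93.2 - 33 - 20   # distant macro: long delay + poor codec

print(f"healthy MOS:    {mos_from_r(r_healthy):.1f}")     # 4.3
print(f"sleep-zone MOS: {mos_from_r(r_sleep_zone):.1f}")  # 2.1
```

An optimization function that included even this crude MOS term would have priced in the voice degradation; the deployed AI had no such input.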

Case 3: MIMO layer reduction misses enterprise building

An operator's AI reduced MIMO layers from 4T4R to 2T2R on a sector facing a low-traffic residential area. The sector also served an enterprise office building 400 meters away. Indoor throughput in the building dropped from 120 Mbps to 45 Mbps. The building's traffic was statistically insignificant in the sector's aggregate PRB utilization, so the AI's decision was rational from a network perspective.

Indoor walk test measurements quantified the impact. The sector KPIs showed no degradation.


The technology stack for AI-era field validation

Field tools in an AI-RAN environment need capabilities that go beyond traditional drive testing:

Minimum capability requirements

  1. Real-time Layer 3 decoding. RRC and NAS message parsing to identify signaling anomalies that aggregate KPIs miss. MeasurementReport, RRCReconfiguration, and handover command analysis.

  2. VoLTE/VoNR QoE measurement. Per-call MOS scoring with codec identification, jitter measurement, and RTCP analysis. The AI optimizes for call setup success; the field tool validates call quality.

  3. GPS-correlated multi-layer logging. Every RF measurement (RSRP, RSRQ, SINR), every signaling event, and every throughput sample must be geotagged with sub-10m accuracy for spatial analysis.

  4. UE capability reporting. Knowing what the device can do (supported CA combos, MIMO layers, NR bands) is essential to determine whether a performance limitation is network-side or device-side.

  5. Lightweight, rapid deployment. In an AI-RAN world, field validation is triggered on exception, not scheduled quarterly. The tool must be deployable in minutes, not hours. This inherently favors smartphone-based solutions over vehicle-mounted hardware.

  6. Export compatibility. Measurements must integrate with OSS/BSS analytics platforms. Standard formats (CSV, KML, QMDL) plus API-based ingestion enable the feedback loop from field to AI model.
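Requirements 3 and 6 together imply a simple record schema: every sample geotagged, every export in a format the OSS side can ingest. A minimal sketch with an illustrative schema (`Sample` and its fields are assumptions, not a HiCellTek or OSS-standard format):

```python
import csv
import io
from dataclasses import dataclass, asdict

@dataclass
class Sample:
    """One GPS-correlated field measurement (illustrative schema)."""
    timestamp: str
    lat: float
    lon: float
    pci: int
    rsrp_dbm: float
    sinr_db: float

samples = [
    Sample("2026-03-17T10:00:01Z", 41.3874, 2.1686, 101, -97.5, 12.3),
    Sample("2026-03-17T10:00:02Z", 41.3875, 2.1687, 101, -99.1, 10.8),
]

# CSV is the lowest common denominator for OSS/analytics ingestion;
# the same records can feed a KML writer or an HTTP API.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(samples[0])))
writer.writeheader()
writer.writerows(asdict(s) for s in samples)
print(buf.getvalue())
```

Keeping the record flat and self-describing is what makes the field-to-AI feedback loop in requirement 6 possible without per-vendor adapters.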


Conclusion: AI optimizes, the field validates

MWC 2026 made one thing definitively clear: AI-RAN is not the future of network optimization; it is the present. The 89% investment figure, the Telstra AU$122M savings, and the 60% NOC automation projection confirm that the industry has committed.

But Nokia's own AFPV announcement with Infovista confirms the corollary: AI optimization without field validation is an open loop. The network KPIs that feed the AI model are necessary but not sufficient. User experience happens at the device, in the building, on the street, and these remain measurable only by tools that operate where the user operates.

The field engineer of 2026 is not obsolete. The role has elevated from data collector to validation specialist, from routine campaigner to anomaly investigator. The tools must evolve accordingly: lighter, faster, protocol-aware, and always available in the engineer's pocket.

The operators who will extract the most value from their AI-RAN investments are those who close the loop with systematic field validation, turning AI’s network-optimal decisions into genuinely user-optimal outcomes.

Takwa Sebai

Founder of HiCellTek. 15+ years in telecom, operator side, vendor side, field side. Building the field tool RF engineers deserve.
