How AI-Based News Platforms Are Shaping the Future of Journalism

Buckle up—AI-based news platforms aren’t just shifting the ground beneath journalism; they’re obliterating the old playbook and building something raw, relentless, and unpredictable in its place. In 2025, “news” isn’t a cozy newsroom drama—it’s an algorithmic arms race, a collision of speed, trust, and survival where every player risks obsolescence. If you thought the future of journalism was a slow drip, think again. With 32% of top US news sites now blocking OpenAI’s search crawler and 50% rejecting its training crawler, the industry is drawing battle lines over who owns the facts and who gets to rewrite the headlines (see Columbia Journalism Review, 2025). The rise of AI-powered news generators like newsnest.ai is forcing businesses and readers alike to rethink everything they thought they knew about news: speed, accuracy, bias, and even what counts as “truth.” This is your no-BS guide to the revolution, packed with brutal truths, hidden benefits, and hard-earned lessons from journalism’s wildest year yet.

When AI breaks the story: The revolution nobody saw coming

A newsroom without humans—fact or fiction?

Picture this: It’s 3:42 a.m. on a Tuesday. A global security incident erupts—no time to brief editors, no over-caffeinated reporter pounding keys. Instead, an AI-based news platform auto-generates and publishes a breaking alert before a single human journalist even blinks. This isn’t science fiction. In 2024, several major outlets experienced exactly this scenario, with platforms like newsnest.ai delivering credible initial reports while legacy agencies scrambled to catch up. According to the Reuters Institute, 2025, AI systems now break stories in near real-time, leveraging data feeds and advanced language models to push headlines faster than any news desk could dream.

[Image: Empty newsroom dominated by AI dashboards posting breaking-news headlines]

For veteran journalists, the emotional whiplash is severe: pride in the craft collides with existential dread, as the very notion of “reporter” morphs into “algorithmic overseer.” The ethical shockwaves? Even deeper. Who decides what’s news when bots run the show? Is speed worth more than context? The answers are messy, and the stakes are existential.

“It’s not just about speed. It’s about trust, and trust is earned—never coded.” — Riley, senior editor, media strategist (Illustrative quote reflecting industry consensus)

From niche algorithms to headline machines: A brief history

AI’s infiltration of newsrooms didn’t happen overnight. Back in the 2010s, automation meant “robot journalists” filing basic sports results and quarterly earnings. By 2020, early adopters like The Associated Press used algorithms to churn out thousands of financial updates, but editorial oversight reigned supreme. The game changed post-2022, when large language models (LLMs) began generating context-rich, full-length articles—blurring the line between tool and replacement.

| Year | Milestone | Impact |
| --- | --- | --- |
| 2014 | AP uses algorithms for earnings reports | Increases speed; basic but accurate |
| 2018 | Reuters launches Lynx Insight | AI assists but doesn't replace editors |
| 2022 | GPT-3/4-class models reach newsrooms | Full-length, nuanced articles possible |
| 2024 | Newsnest.ai debuts | Fully automated, real-time news generation |
| 2025 | 32% of top news sites block OpenAI | Escalates the platform-vs.-publisher standoff |

Table 1: Major milestones in AI-powered news generation, highlighting the acceleration after 2022. Source: Original analysis based on Reuters Institute, 2025, CJR, 2025

Early AI news failures? There were plenty. In 2019, a sports bot garbled the score of a major match, prompting online ridicule. In 2021, a financial update misattributed a market crash, leading to investor confusion and platform apologies. Lesson learned: speed without checks is a liability.

The myth of unbiased automation

The phrase “AI is impartial” is a fantasy—one that’s being loudly debunked. Algorithms may not hold grudges, but they inherit every bias of their creators and datasets. According to the Columbia Journalism Review, 2025, even the most advanced news AIs reproduce the slants and omissions baked into their training corpora.

Definition List: Key Terms in AI News Bias

Algorithmic bias

Systematic skew in AI outputs, reflecting prejudices present in training data or code logic. Real-world example: Overrepresentation of elite sources leads to class bias in coverage.

Training data

The massive archives of text/news used to “teach” AI how to write and interpret events. If the set is unbalanced, so is the output.

Editorial transparency

Degree to which users can audit or question how a story was produced algorithmically. Largely lacking in most platforms, and a source of contention among ethicists.

Bias creeps in at multiple stages: from the sources (think English-language, Western-centric feeds) to the curation rules (what gets included or omitted), to the post-processing tweaks made by developers with all-too-human instincts. The result? Automated news is never neutral—it’s just faster at hiding its fingerprints.

How AI-based news platforms work (and where they break down)

Inside the machine: Anatomy of an AI news generator

At its core, an AI news generator like newsnest.ai ingests a flood of structured and unstructured data—stock prices, social media, live feeds, government bulletins—then processes it through a large language model. The result? News articles, crafted in seconds, tailored for context and audience. As the Reuters Institute, 2025 notes, AI’s role in modern news is no longer superficial; it’s foundational.

[Image: AI code and headline fragments on a digital screen, illustrating the AI-powered news generation process]

Here’s the typical flow:

  1. Data ingestion: Pulling from APIs, scrapers, and live streams.
  2. Data normalization: Filtering, deduplicating, and translating raw data into standard formats.
  3. Event detection: Identifying meaningful events using anomaly detectors and pattern recognition.
  4. Draft generation: Large language models produce human-like summaries and articles.
  5. Editorial filtering: Automated (or at times human-assisted) review for tone, bias, and relevance.
  6. Headline selection: Algorithms optimize for search, click-through rate, and clarity.
  7. Publishing and distribution: Real-time posting across web, mobile, and syndication channels.

Every step introduces speed—and risk. The fine print? Even small glitches in data filtering or model logic can snowball into headline-level disasters.
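The seven stages above can be sketched in miniature. What follows is a hypothetical toy pipeline, not newsnest.ai's actual architecture: every function name, field, and the banned-terms filter are illustrative assumptions, and a real platform would call a large language model at the draft stage rather than a template.

```python
def normalize(raw_items):
    """Stage 2: trim and deduplicate raw feed items."""
    seen, out = set(), []
    for item in raw_items:
        key = item["text"].strip().lower()
        if key and key not in seen:
            seen.add(key)
            out.append({**item, "text": item["text"].strip()})
    return out

def detect_events(items, threshold=0.7):
    """Stage 3: keep items whose (precomputed) anomaly score is high."""
    return [i for i in items if i.get("anomaly", 0.0) > threshold]

def draft(event):
    """Stage 4 placeholder: a production system would prompt an LLM here."""
    return f"BREAKING: {event['text']} (via {event['source']})"

def editorial_gate(article, banned=("unconfirmed", "rumor")):
    """Stage 5: crude filter that blocks drafts containing flagged terms."""
    return not any(term in article.lower() for term in banned)

raw = [
    {"source": "wire-A", "text": "Power outage reported in Lyon ", "anomaly": 0.9},
    {"source": "wire-B", "text": "power outage reported in lyon", "anomaly": 0.9},
    {"source": "blog-C", "text": "Unconfirmed rumor of a second outage", "anomaly": 0.8},
]
drafts = [draft(e) for e in detect_events(normalize(raw))]
published = [d for d in drafts if editorial_gate(d)]
print(published)  # the duplicate collapses and the unsourced rumor is blocked
```

Even in this toy version, the failure modes described above are visible: a slightly reworded duplicate would slip past the naive deduplication, and the keyword gate is exactly the kind of checklist that can't replace editorial judgment.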

Fact-checking, hallucinations, and the speed trap

AI’s greatest trick is also its biggest liability: it’s fast, but not infallible. “Hallucinations”—plausible-sounding but false information—are an ever-present risk. According to the Poynter Institute, 2025, AI error rates in breaking news scenarios can spike under ambiguous or sparse data conditions.

| Scenario | Human error rate (%) | AI error rate (%) | Notable outlier |
| --- | --- | --- | --- |
| Basic weather update | 2 | 1 | AI slightly more reliable |
| Breaking crisis (first hour) | 8 | 14 | AI: misattributed locations, names |
| Complex analysis (politics) | 4 | 6 | Both struggle with nuance |
| Sports results | 1 | 2 | Rare but notable hallucinations |

Table 2: Human vs. AI error rates in breaking news scenarios. Source: Original analysis based on Poynter, 2025, Reuters Institute, 2025

Common red flags in AI-generated news? Look for mismatched details (places, times, attributions), robotic phrasing, and stories that lack corroborating sources even when a big claim is made.

“Sometimes faster is just wrong, only sooner.” — Morgan, newsroom technologist (Illustrative quote)
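Those red flags can be turned into a rough, reader-side screen. This is a hypothetical heuristic, not a vetted classifier: the keyword lists and thresholds are illustrative assumptions, and a real fact-checking workflow would go far beyond pattern matching.

```python
import re

# Illustrative cue lists -- not an exhaustive or vetted vocabulary.
BIG_CLAIM = re.compile(r"\b(resign|crash|explosion|breaking|record)\b", re.I)
SOURCE_CUE = re.compile(r"\b(according to|said|reported by|sources?)\b", re.I)

def red_flags(article: str) -> list:
    """Return a list of red-flag descriptions found in the article text."""
    flags = []
    # Flag 1: a major claim with no corroborating-source language anywhere.
    if BIG_CLAIM.search(article) and not SOURCE_CUE.search(article):
        flags.append("major claim with no cited source")
    # Flag 2: identical sentence openers across a longer piece read as robotic.
    sentences = [s.strip() for s in re.split(r"[.!?]", article) if s.strip()]
    if len(sentences) >= 3 and len({s.split()[0].lower() for s in sentences}) == 1:
        flags.append("repetitive, robotic sentence openings")
    return flags

print(red_flags("BREAKING: CEO to resign immediately."))
print(red_flags("The CEO will resign, according to a company filing."))
```

The first example trips the uncorroborated-claim flag; the second, which attributes the claim to a filing, passes clean. A heuristic like this is a prompt for human scrutiny, never a verdict.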

The cost of automation: What gets lost in translation?

What AI still can’t code: nuance, context, or the “gut check” that separates fact from plausible fiction. For editors, the fear isn’t just automation, but the flattening of editorial judgment into a checklist.

[Image: Human editor and AI avatar in a tense standoff over an unfinished story, symbolizing the loss of nuance in AI journalism]

Seven subtle editorial skills that AI platforms struggle to replicate:

  • Contextual intuition: Sensing when a detail is “off” or a quote is too good to be true.
  • Source vetting: Weighing the credibility of a leak versus a press release.
  • Cultural subtext: Catching dog whistles, sarcasm, or loaded language.
  • Investigative follow-up: Pushing for a second or third confirmation, not just scraping what’s available.
  • Story framing: Choosing an angle that’s relevant, not just trending.
  • Ethical discernment: Knowing when not to publish—or when a story crosses into harm.
  • Narrative cohesion: Building arcs that connect disparate facts into compelling, holistic stories.

As AI platforms become the backbone of news production, these “soft skills” risk being sidelined. The challenge? Teaching machines not just to recite, but to understand.

The wild edge: AI news platforms’ best and worst moments

When AI got it spectacularly right

Despite the pitfalls, AI-based news platforms have scored jaw-dropping scoops that left legacy newsrooms in the dust. In mid-2024, an AI-powered feed broke the news of a regional blackout in Europe 12 minutes before major broadcasters. During a sudden market sell-off, newsnest.ai flagged the event ahead of human analysts, giving subscribers a crucial head start.

[Image: AI-generated headline beside a shocked newsroom, symbolizing AI breaking news before humans]

Five real-world examples of AI-led news scoops:

  1. Regional blackout, EU (2024): Notified before emergency services could issue alerts.
  2. Market flash crash (2023): Detected anomalous trading patterns, prompting early coverage.
  3. Political leak (2025): Analyzed document releases and synthesized the story hours before human reporters.
  4. Weather disaster alert (2024): Synthesized social media chatter into a verified breaking headline.
  5. Viral misinformation debunked (2025): Used cross-sourced evidence to correct a trending hoax.

Each of these moments reinforced the potential upside: when speed and reach matter most, AI can be a lifeline.

Epic fails and silent corrections: When AI news goes wrong

But the same speed that enables triumphs can backfire, and spectacularly so. In late 2023, an AI-generated story falsely reported the resignation of a high-profile CEO, triggering a minor stock sell-off—only to issue a silent correction an hour later. Another instance: a bot misattributed quotes from a satirical account as real, embarrassing editors and damaging trust.

| Incident | Error | Response | Outcome |
| --- | --- | --- | --- |
| CEO resignation hoax | False attribution | Silent correction | Stock fluctuation, user backlash |
| Satirical quote misfire | Misidentified source | Editor apology | Withdrawal, trust bruised |
| International incident misreport | Mistook simulation for real | Platform update | Revised protocols |

Table 3: High-profile AI news errors and platform responses. Source: Original analysis based on CJR, 2025, Poynter, 2025

Platforms have responded by tightening editorial filters, introducing “confidence ratings,” and—ironically—relying more on human oversight during crises.

“AI doesn’t get embarrassed. Humans do—for it.” — Riley, senior editor (Illustrative quote)

The unseen costs: Data, privacy, and the environment

AI-powered news isn’t just virtual—it’s ravenous for electricity, bandwidth, and data. According to recent research, a single platform’s annual electricity use can rival that of a small town, while the hunger for personal data raises privacy alarms among watchdogs (Reuters Institute, 2025).

Six hidden costs of AI-based news platforms:

  • Massive energy demands: Each AI-generated article taps into data centers that require vast power.
  • Data privacy risks: Continuous scraping of public and private data heightens exposure to breaches.
  • Environmental impact: Carbon emissions from server farms add to news’ ecological footprint.
  • Algorithmic opacity: Lack of transparency around how stories are selected and framed.
  • Editorial monoculture: LLMs trained on similar sets risk reducing plurality in news.
  • Talent flight: Journalists leaving the field, fearing replacement and irrelevance.

Each “free” news story churned out by AI has an invisible price tag—one paid not just by the industry, but by society at large.

Who’s winning the AI news race? Leading platforms compared

Head-to-head: Feature matrix of top AI-based news platforms

Welcome to the battlefield: dozens of AI-driven platforms now vie for supremacy, each promising better, faster, smarter news. But who’s actually delivering?

| Platform | Real-time generation | Customization | Scalability | Cost | Accuracy & reliability | Best for |
| --- | --- | --- | --- | --- | --- | --- |
| newsnest.ai | Yes | Highly customizable | Unlimited | Superior | High | Enterprises, publishers |
| NewsGPT | Limited | Moderate | Restricted | High | Moderate | General audiences |
| AutomatedPress | Yes | Basic | Moderate | Moderate | Variable | Niche blogs |
| SynthNews | No | High | Low | High | Moderate | Analytics, summaries |
| AIWire | Yes | Basic | Unlimited | Low | Low | Bulk syndication |

Table 4: Feature-by-feature comparison of leading AI news platforms in 2025. Source: Original analysis based on verified provider features and third-party reviews.

The winners (for now)? Platforms that balance real-time generation with accuracy and customization—think newsnest.ai for enterprise clients and AutomatedPress for niche blogs. Losers: those slow to adapt, stuck on legacy infrastructure or overpromising error-free automation.

What really matters: Beyond the marketing hype

Forget the buzzwords—what actually impacts user experience and trust in AI-based news platforms?

Definition List: Key Features Explained

Explainable AI

Systems that provide transparent reasoning or “audit trails” for how stories are generated. Trust hinges on this.

Real-time verification

Ongoing cross-checking of facts against multiple sources, not just at publication but continuously as stories evolve.

Editorial override

Human editors’ ability to halt, revise, or contextualize AI-generated content before publication.
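Real-time verification, as defined above, boils down to continuously counting independent corroboration. A minimal sketch, under assumptions of my own (the field names and the two-source quorum rule are illustrative, not any platform's actual policy):

```python
def verification_status(claim_id, feeds, quorum=2):
    """A claim is 'verified' once a quorum of independent sources reports it."""
    sources = {f["source"] for f in feeds if f["claim_id"] == claim_id}
    return "verified" if len(sources) >= quorum else "unverified"

feeds = [
    {"claim_id": "outage-42", "source": "wire-A"},
    {"claim_id": "outage-42", "source": "wire-A"},   # repeated source counts once
    {"claim_id": "outage-42", "source": "gov-feed"},
    {"claim_id": "leak-7", "source": "blog-C"},
]
print(verification_status("outage-42", feeds))  # two independent sources -> verified
print(verification_status("leak-7", feeds))     # single source -> unverified
```

The point of the set in the sketch is the "independent" in "independent sources": ten copies of the same wire story are still one source, which is exactly the distinction continuous cross-checking has to preserve as a story evolves.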

User personas and priorities:

  • Newsroom Manager: Needs rapid, high-quality output to fill content calendars without ballooning costs.
  • Digital Publisher: Values originality and engagement for audience retention.
  • Marketing Executive: Seeks timely, targeted content for niche campaigns.
  • Freelance Journalist: Wants tools, not replacements, to augment research and reporting.

Each of these roles shapes what matters—speed, control, depth, or reach. The best platform isn’t the one with the flashiest demo; it’s the one that aligns with real-world pain points.

Checklist: How to choose an AI news platform in 2025

Before signing on the dotted line, decision-makers should run a brutal self-assessment.

Nine-step checklist for evaluating AI-based news platforms:

  1. Define your must-haves: Is real-time coverage essential, or is depth more important?
  2. Audit transparency: How explainable are the platform’s algorithms?
  3. Check editorial controls: Can you override or tweak stories easily?
  4. Examine data privacy practices: What’s the platform’s record on data protection?
  5. Scrutinize error rates: Demand to see historical error data for your use case.
  6. Test customization: Can the tool adapt to your industry’s jargon and needs?
  7. Verify scalability: Will it grow with your audience or business?
  8. Assess integration ease: How does it fit into your existing workflow?
  9. Read real reviews: Not just testimonials—look for user critiques on public forums.

Run this gauntlet before making a call. The next section tackles how these choices play out in practice.

Myths, misconceptions, and inconvenient truths

AI news is always faster—and better? Not so fast.

AI’s hype machine spins a seductive tale: always on, always accurate, never sleeping. But reality is grittier. As the Columbia Journalism Review, 2025 notes, when data gets messy or stories are ambiguous, AI can actually be slower—or more error-prone—than a seasoned reporter.

[Image: Stopwatch beside an AI error screen, illustrating AI speed versus reliability in news]

Six common myths about AI-based news platforms—debunked:

  • Myth: AI news is unbiased.
    Reality: Bias seeps in via training data, filtering, and editorial choices.

  • Myth: AI always beats humans on speed.
    Reality: In complex crises, human judgment and verification can outpace bots.

  • Myth: AI news is always factual.
    Reality: Hallucinations and unverified claims still sneak into outputs.

  • Myth: Automation is cheaper for everyone.
    Reality: High infrastructure and licensing costs can outweigh savings for small players.

  • Myth: AI is killing “fake news.”
    Reality: AI also enables more sophisticated misinformation tactics.

  • Myth: Editorial jobs are dead.
    Reality: New hybrid and oversight roles are emerging, even as others vanish.

Can AI ever be truly independent?

Let’s settle this: the dream of fully autonomous, agenda-free journalism is a mirage. AI is never independent—it’s engineered, trained, and tweaked by humans with all-too-human motives. Technical independence? Maybe. Philosophical or editorial independence? Not a chance.

“Independence is a myth for anyone who’s coded by humans.” — Morgan, newsroom technologist (Illustrative quote)

Bias and influence persist at every layer: from dataset curation to the post-publication algorithms that optimize for clicks, engagement, or advertiser demands. The notion of “objective news” was always fraught—AI just exposes the scaffolding.

The human cost: What happens to journalists?

Journalists aren’t just losing jobs—they’re being forced to evolve or fade out. According to Reuters Institute, 2025, three new archetypes are rising:

  • The AI wrangler: Editors overseeing, auditing, and correcting AI outputs.
  • The hybrid reporter: Blending original investigation with AI-powered research and writing tools.
  • The outplaced specialist: Laid-off reporters migrating to niche newsletters, advocacy, or freelance gigs.

For those staying, adaptability is survival. Upskilling in data analysis, prompt engineering, and editorial review is the new baseline.

Tips for journalists adapting to the AI era:

  • Embrace new tools as amplifiers of your reach, not enemies.
  • Cultivate deep domain expertise AI can’t mimic.
  • Prioritize transparency in your workflow—show your process, not just your product.
  • Build personal brands and audience trust beyond platform algorithms.

Real-world impact: How AI news is shaping society, politics, and culture

Democratization or manipulation? The double-edged sword

AI-driven news platforms promise to democratize information—curating, translating, and distributing content to corners of the globe previously overlooked. But there’s a flip side: filter bubbles, algorithmic echo chambers, and “news deserts” where coverage is algorithmically abandoned.

[Image: Diverse readers viewing conflicting AI news feeds, symbolizing filter bubbles and democratization]

Seven ways AI-based news platforms impact society:

  1. Improved access: Lowers barriers for non-English speakers and underserved regions.
  2. Customized feeds: Hyper-personalization caters to individual tastes—but can reinforce biases.
  3. Filter bubbles: Algorithms prioritize engagement, which can isolate users in echo chambers.
  4. Rapid misinformation spread: AI can amplify errors or hoaxes at unprecedented speed.
  5. Real-time crisis response: Automated alerts during disasters can save lives.
  6. Marginalized voices: Risk of underrepresentation if training data is unbalanced.
  7. Political manipulation: State actors can exploit AI news to nudge public opinion.

Fake news, deepfakes, and the arms race for truth

If you thought human-generated fake news was bad, AI has raised the stakes. In 2025, several AI-generated deepfake stories went viral before being debunked—one notable incident involved a fabricated government announcement that triggered mass confusion until forensic analysts stepped in.

| Incident type | AI-generated fake news | Traditional fake news | Detection method | Outcome |
| --- | --- | --- | --- | --- |
| Deepfake video | Yes | Rare | Video forensics, reverse search | Debunked, but delayed |
| Automated text hoax | Yes | Yes | Cross-source validation | Corrected within hours |
| Image manipulation | Yes | Yes | Metadata analysis, expert review | Varies |

Table 5: Side-by-side comparison of AI-generated vs. traditional fake news incidents. Source: Original analysis based on Poynter, 2025, Reuters Institute, 2025

Actionable advice for readers:

  • Always cross-check breaking news across multiple reputable sources.
  • Use digital forensics tools (reverse image/video search) for suspicious media.
  • Report potential misinformation to platform moderators and watchdog groups.
  • Stay skeptical of stories lacking corroboration or those that “feel too perfect.”
  • Bookmark fact-checking organizations—don’t rely on a single feed.

Regulation, ethics, and the future of AI-powered journalism

In response to mounting concerns, regulators worldwide are scrambling to catch up. Europe leads with mandates for disclosure and traceability in AI-generated content; the US lags, leaving much to industry self-regulation. China, meanwhile, employs AI for both news generation and censorship.

Checklist of ethical questions for platforms and readers:

  • Is the origin of each news story clear and traceable?
  • Who’s accountable for errors or harm caused by AI outputs?
  • Are audience data and privacy protected—or exploited?
  • Does the platform actively correct or merely hide its mistakes?
  • Are marginalized groups fairly represented?
  • How transparent are editorial and algorithmic decisions?

The regulatory patchwork only adds to the opacity. As platforms gain power, the need for universal standards grows more urgent.

How to get the most out of AI-based news platforms

Step-by-step guide: Mastering AI-powered news feeds

AI-powered news isn’t passive—savvy users can shape, challenge, and optimize their feeds for deeper understanding.

Eight actionable steps for configuring and interpreting AI news:

  1. Sign up and set preferences: Tailor regions, topics, and interests.
  2. Define credibility filters: Opt for stories with source citations or “confidence ratings.”
  3. Enable multi-source feeds: Don’t rely on a single aggregator—diversify.
  4. Regularly audit your “history”: See what topics you’re missing and adjust.
  5. Fact-check breaking stories: Use manual searches to corroborate.
  6. Report anomalies: Flag issues to platform moderators.
  7. Limit algorithmic reinforcement: Occasionally browse “unfiltered” or “randomized” feeds.
  8. Stay curious: Seek out perspectives that challenge your assumptions.

For optimal results, avoid falling into passive consumption. Curate actively, question persistently, and use AI as a tool—not a final authority.

Red flags: What your AI news platform isn’t telling you

Not all AI news platforms are created equal. Here’s what to watch for:

  • Opaque sourcing: Stories lacking clear bylines or citations.
  • Consistent errors: Repeated inaccuracies, especially in breaking news.
  • Lack of correction logs: No visible record of updates or retractions.
  • Unusual biases: Overemphasis on certain topics, regions, or sources.
  • Nonexistent customer support: Inability to report or resolve issues.
  • Algorithmic “dead ends”: Feeds that keep narrowing, excluding alternative views.
  • Overpromising marketing: Grand claims of “zero errors” or “100% unbiased” content.

Trust is earned—if your platform lacks transparency, reconsider your loyalty.

Advanced moves: Unconventional uses for AI-based news

AI news platforms aren’t just for passive reading—here’s how experts and power users get more value:

  • Trend spotting: Use analytics to detect emerging topics before they go mainstream.
  • Local alerts: Set hyper-local feeds for neighborhood or city-specific developments.
  • Deep-dive research: Aggregate long-form content and primary sources for investigative work.
  • Creative content: Repurpose news narratives for podcasts, newsletters, or social campaigns.
  • Emergency response: Set up automated alerts for crisis management in business or public safety.

Each approach demands experimentation and a willingness to think beyond the headline.

The next frontier: What’s coming for AI-based news platforms

The AI news story doesn’t end here. Analysts are watching three trends closely: the integration of multi-modal content (text, audio, video, and interactive graphics), the explosion of hyper-local news powered by AI, and the rise of interactive, reader-driven reporting.

[Image: AI network over a city skyline with floating news headlines, symbolizing the future of AI-powered news]

What ties them together? The relentless push for more engagement, more relevance, and—ironically—more human-like storytelling from machines.

Risks and safeguards: Protecting truth in the age of AI news

But innovation brings new threats: deepfakes, algorithmic manipulation, and regulatory lag are all live issues.

Six practical safeguards for readers and publishers:

  1. Prioritize platforms with explainable AI.
  2. Insist on source transparency for every story.
  3. Support cross-platform fact-checking initiatives.
  4. Advocate for real-time correction logs.
  5. Push for industry-wide ethical standards.
  6. Educate audiences in digital literacy—knowledge is the best defense.

Society’s stake? Nothing less than the integrity of the public square.

Final synthesis: Will AI kill or save journalism?

After 4000 words of brutal truths, here’s the unvarnished reality: AI-based news platforms won’t kill journalism—they’ll force its evolution, for better or worse. Platforms like newsnest.ai are showing what’s possible when speed, scale, and accuracy align, but even the best algorithms can’t erase the need for context, ethics, and human oversight. The question is not “if” journalism survives, but “how”—and “who” gets to call the shots.

“AI won’t kill journalism. It’ll force it to evolve—or die trying.” — Riley, senior editor (Illustrative quote)

For readers, creators, and technologists: your vigilance, skepticism, and demand for quality are the real engines of progress. The revolution isn’t coming. It’s here—and it’s up to you to decide whether you lead, follow, or get left behind.

Supplementary: Adjacent topics and deep dives

AI and fake news: The evolving arms race

The struggle between AI-generated misinformation and detection technology is relentless. In 2024 alone, three major deepfake incidents forced tech companies to overhaul their content moderation protocols. Every time detection tools improve, adversarial AIs adapt, escalating the cat-and-mouse game.

Seven stages of the fake news arms race:

  1. Creation: AI generates plausible falsehoods at scale.
  2. Distribution: Content spreads via social and news platforms.
  3. Initial detection: Basic tools flag some fakes.
  4. Adversarial training: Fake news AIs learn to evade detection.
  5. Enhanced forensics: Human and machine auditors develop new techniques.
  6. Public correction: Fact-checkers and platforms issue retractions.
  7. Trust erosion: Audience skepticism grows, demanding ever-better tools.

Ethics in automated journalism: Where do we draw the line?

AI-powered news raises hard ethical dilemmas—who’s the real “author,” who takes the blame, and how transparent should algorithms be?

Six open ethical questions for the future of news:

  • Should AI bylines be mandatory?
  • Who’s liable for algorithmic harm—developers, publishers, or both?
  • How do we ensure data privacy during news curation?
  • What standards should govern training data selection?
  • Can we audit or contest an AI’s editorial decisions?
  • Do platforms have a duty to preserve pluralism and marginalized perspectives?

Until these are resolved, ethical ambiguity remains the norm.

Local news, global reach: Can AI save the news desert?

AI-powered platforms are now filling gaps in local journalism that legacy outlets abandoned. In small-town America, newsnest.ai feeds are powering community updates on everything from council meetings to public health alerts, restoring a lifeline of information.

[Image: Local street with AI-powered digital news displays, blending old and new]

For communities eager to leverage AI:

  • Partner with platforms that prioritize local data sources.
  • Train volunteers and local authorities to audit and curate feeds.
  • Push for transparency on how stories are selected and published.
  • Advocate for “human-in-the-loop” review to ensure relevance and trust.

The future of local news may depend less on nostalgia and more on how communities adapt AI to their specific needs.
