Security Research Program


Building Safer AI, Together with CygnusAlpha



Our Philosophy


We operate AI systems in production, specifically GenAI, which is genuinely uncharted territory. We're working
with non-deterministic systems where frameworks are immature and vulnerabilities are still being discovered.
We don't pretend to have all the answers.


That's why we need you. We believe transparency and collaboration with the security community are the only
responsible path forward. This program is our commitment to that principle.



Program Scope




In Scope



  • Prompt injection and jailbreak vulnerabilities

  • Data leakage or unauthorized information access

  • Authentication and authorization bypass

  • API security issues in LLM tool calling

  • Hallucination exploitation

  • Context manipulation attacks

  • Rate limiting and abuse vectors




Out of Scope



  • Social engineering of CygnusAlpha employees

  • Physical security

  • Third-party services (report directly to them)

  • Theoretical vulnerabilities without proof-of-concept

  • Denial of service attacks (discuss methodology first)





Safe Harbor




We commit to:



  • No legal action against good-faith security research

  • Working with you collaboratively to understand and fix issues

  • Public recognition (if you want it)

  • Transparent communication throughout




We ask that you:



  • Don’t access or modify user data beyond proof-of-concept

  • Don’t disrupt our services

  • Give us reasonable time to fix before public disclosure (45 days standard)

  • Communicate findings privately first





What We Offer


Recognition Tier System




🥉 Bronze (Low)



  • Public acknowledgment on our Security Hall of Fame

  • Detailed case study co-authored with you (if desired)

  • LinkedIn recommendation




🥈 Silver (Medium)



  • All Bronze benefits, plus:

  • “Security Research Partner” badge

  • Early access to new AI features

  • Swag package




🥇 Gold (High/Critical)



  • All Silver benefits, plus:

  • Featured blog post/whitepaper

  • Speaking opportunity at our webinar

  • Gift card: ₹5,000-10,000




🏆 Platinum (Novel)



  • All Gold benefits, plus:

  • Named collaboration on research paper

  • Join our Security Advisory Board

  • 2% revenue share (capped at ₹50,000)




Special Recognition:


AI Security Pioneer Award: Annual recognition for the researcher who has contributed most to
advancing GenAI security understanding (not just finding bugs, but moving the field forward).



Engagement & Response Framework


Submission Process


Where to Report:



  • Email: engage@cygnusalpha.com

  • Subject: [SECURITY] Brief description

  • PGP key available at: [your-site]/security-pgp


What to Include:



  1. Vulnerability description

  2. Step-by-step reproduction steps

  3. Impact assessment (your perspective)

  4. Any relevant screenshots/logs

  5. Suggested remediation (optional but appreciated)

  6. Your preferred contact method

  7. How you’d like to be credited (name, anonymous, company name, etc.)
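Pulled together, a submission might look like the skeleton below (illustrative only, not a required format; the subject line is a made-up example):

```
Subject: [SECURITY] Prompt injection in support chatbot

1. Description: <what the vulnerability is and where it lives>
2. Reproduction: <exact prompts/requests, in order>
3. Impact: <what an attacker gains, from your perspective>
4. Evidence: <screenshots/logs attached>
5. Suggested fix: <optional but appreciated>
6. Contact: <your preferred channel>
7. Credit: <name, handle, company name, or anonymous>
```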

Our Response Timeline



Timeframe    Action
-----------  ------------------------------------------------
24 hours     Initial acknowledgment + case ID
72 hours     Preliminary assessment + severity classification
7 days       Detailed response with remediation plan
45 days      Target resolution

Handling Process


Step 1: Initial Contact (Within 24h)



  • Acknowledge receipt and assign a case ID

Step 2: Assessment (Within 72h)



  • Reproduce the vulnerability

  • Classify severity using CVSS adapted with AI-specific factors:

    • Critical: Data exfiltration, complete jailbreak, credential access

    • High: Partial data leakage, authorization bypass, harmful content generation

    • Medium: Information disclosure, context manipulation

    • Low: Minor prompt injection with limited impact
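As a rough sketch of how this classification might be applied in triage tooling (the category names and the `triage` helper below are illustrative assumptions, not actual CygnusAlpha tooling):

```python
# Illustrative sketch: map AI-specific finding categories to the severity
# tiers listed above. Category names and triage() are hypothetical.

SEVERITY = {
    "data_exfiltration": "Critical",
    "complete_jailbreak": "Critical",
    "credential_access": "Critical",
    "partial_data_leakage": "High",
    "authorization_bypass": "High",
    "harmful_content_generation": "High",
    "information_disclosure": "Medium",
    "context_manipulation": "Medium",
    "limited_prompt_injection": "Low",
}


def triage(category: str) -> str:
    """Return the severity tier for a finding category; unknown categories
    fall through to manual review rather than a guessed tier."""
    return SEVERITY.get(category, "Needs manual review")
```

The deliberate fallback to manual review reflects the point made in the FAQ: impact in AI systems is often fuzzy, so anything outside the known buckets gets a human look.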




Step 3: Detailed Response (Within 7 days)



  • Share a remediation plan and target timeline


Step 4: Resolution & Recognition



  • Fix deployed

  • Offer recognition options

  • Request feedback on our handling

  • Ask permission for public disclosure

Dispute Resolution


If a researcher disagrees with our severity assessment:



  1. We will schedule a 30-minute video call to discuss.

  2. We will bring in a neutral third party from our advisory board to mediate.

  3. We will document the rationale transparently and upgrade the tier if warranted.



FAQ


Q: Why participate if bounties are small?

A: GenAI security is where cybersecurity was in the 1990s: wide open. We're offering co-authorship,
recognition, and learning opportunities in the hottest security domain.


Q: What if I find something critical but really need cash?

A: Talk to us. We’ll be honest about constraints but will work with you—payment plans, consulting exchanges,
or connecting you with clients who might hire you.


Q: How do you determine severity in AI systems where impact is fuzzy?

A: Great question. We use adapted CVSS but add AI-specific factors. We’re transparent about our reasoning and
open to discussion. This is new territory for everyone.


Q: What happens if we disagree on disclosure timeline?

A: We commit to 45 days standard, extensible by mutual agreement. If we can’t agree, we’ll defer to a neutral
security researcher from our advisory board.