Draft v1. Product-specific disclosure. Update the model list below whenever the underlying stack changes.

AI Disclosure

Every video PixelForgeHub generates is produced by AI models. This page lists every model we use, its training-data provenance, who hosts it, and how a human reviews each output before delivery. It exists to comply with California SB 942 (the California AI Transparency Act) and AB 2013, EU AI Act Article 50, and the India IT Rules 2021 amendments on synthetic content.

Last updated: 22 April 2026 · Model stack may change — check back periodically.

1. What is AI-generated

Every video generated via PixelForgeHub is AI-assisted:

  • Script — drafted by a large language model, then reviewed by a PixelForgeHub editor for factual claims and tone.
  • Voiceover — synthesized via a neural TTS model. Voices in our library are either licensed from the provider or cloned from a sample the customer owns rights to.
  • Images & visual effects — rendered by diffusion-based image models.
  • Motion & transitions — produced with programmatic animation (Remotion + FFmpeg); no generative video models are used for screen recordings. A sketch of this frame-driven approach follows this list.
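
For illustration, the sketch below shows what we mean by frame-driven animation, assuming a Remotion composition; the component name, fade effect, and timing values are illustrative rather than our production code.

```tsx
import React from 'react';
import {AbsoluteFill, interpolate, useCurrentFrame, useVideoConfig} from 'remotion';

// Deterministic, frame-driven fade-in: every value is plain arithmetic
// on the frame counter, so no generative model is involved.
export const FadeInScene: React.FC<{children: React.ReactNode}> = ({children}) => {
  const frame = useCurrentFrame();
  const {fps} = useVideoConfig();
  // Ramp opacity from 0 to 1 over the first half second, then hold.
  const opacity = interpolate(frame, [0, fps / 2], [0, 1], {
    extrapolateRight: 'clamp',
  });
  return <AbsoluteFill style={{opacity}}>{children}</AbsoluteFill>;
};
```

Because every value is a pure function of the frame number, the render is deterministic and can be reviewed frame by frame.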

2. Models in our current stack

Listed here so enterprise legal teams can conduct their own due diligence. Models are swapped as newer versions ship; the commitment is that every active model is licensed under Apache 2.0, MIT, or RAIL-M terms, or is otherwise commercially permissive.

Text & scriptwriting

  • Google Gemini 2.5 Flash / Pro — via Google AI Studio; used for demo-script generation, scene planning, and caption polish (see the sketch after this list). Training data is described in Google's public documentation.
  • OpenAI GPT-4.1-mini (optional) — fallback when Gemini is unavailable; used only with customer opt-in.
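
For teams that want to see the shape of the integration, here is a minimal sketch of a script-drafting call, assuming Google's @google/genai SDK for Node; the prompt, model string, and environment-variable name are illustrative:

```ts
import {GoogleGenAI} from '@google/genai';

// Drafting call only: the returned script always goes to a human
// editor for factual and tone review before rendering.
const ai = new GoogleGenAI({apiKey: process.env.GEMINI_API_KEY});

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'Draft a 60-second demo script for a project-management app.',
});

console.log(response.text);
```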

Voice & audio

  • Microsoft Edge-TTS neural voices — primary narration layer. Multilingual (75+ languages). No customer audio is sent; only the script text is sent for synthesis.
  • Groq Whisper-large-v3-turbo — used only for transcribing the narration we generate ourselves, to produce burned-in captions. No customer audio is transcribed (see the sketch after this list).
  • IndexTTS-2 / F5-TTS (optional) — available for customers who need Hindi/Marathi emotion-controlled narration.
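
To make the caption pipeline concrete, here is a minimal sketch of the transcription step, assuming Groq's OpenAI-compatible endpoint and the official openai npm client; the file path is illustrative, and the audio is always our own synthesized narration:

```ts
import fs from 'node:fs';
import OpenAI from 'openai';

// Groq exposes an OpenAI-compatible API, so the standard client works
// with a swapped base URL. Only narration we synthesized ourselves is
// uploaded here -- never customer audio.
const groq = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: 'https://api.groq.com/openai/v1',
});

const transcription = await groq.audio.transcriptions.create({
  file: fs.createReadStream('narration.mp3'), // our own TTS output
  model: 'whisper-large-v3-turbo',
  response_format: 'verbose_json', // segment timestamps for burned-in captions
});

console.log(transcription.text);
```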

Image & background generation

  • FLUX.2 [klein] 4B — Apache 2.0; primary commercial image model for hero shots.
  • Qwen-Image 2.0 — Apache 2.0; used when accurate in-image text is needed.
  • SDXL + JuggernautXL — CreativeML Open RAIL++-M; photorealistic fallback.

Color palette extraction

  • node-vibrant — MIT; extracts 6 dominant swatches from your website screenshot so the video theme matches your brand automatically. No data leaves the render machine (sketch below).
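
A minimal sketch of that extraction, assuming node-vibrant v3's promise API; the screenshot path and theme mapping are illustrative:

```ts
import Vibrant from 'node-vibrant';

// Runs entirely on the render machine: the screenshot is read from
// local disk and nothing is uploaded.
const palette = await Vibrant.from('./site-screenshot.png').getPalette();

// The six standard swatches node-vibrant produces; any of them can be
// null for low-contrast images, hence the optional chaining.
const theme = {
  primary: palette.Vibrant?.hex,
  muted: palette.Muted?.hex,
  darkVibrant: palette.DarkVibrant?.hex,
  darkMuted: palette.DarkMuted?.hex,
  lightVibrant: palette.LightVibrant?.hex,
  lightMuted: palette.LightMuted?.hex,
};

console.log(theme);
```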

3. Training data provenance

We do not train our own foundation models. We integrate models published by the providers above. Each provider publishes its own training-data documentation; consult the vendors listed above for their respective policies.

We do not:

  • Train on customer prompts or uploads.
  • Retain generated outputs for model improvement beyond the contractual delivery window.
  • Fine-tune models on individual customer data by default.

4. Human review

Every video delivered to a paying customer passes through at least one human review stage before distribution:

  • Script reviewed for factual claims and acceptable-use policy (AUP) compliance.
  • Voiceover spot-checked for mispronunciation and accidental sensitive content.
  • Final render viewed end-to-end by a PixelForgeHub editor.

Self-serve plans (free tier, Starter) may bypass human review for speed — those outputs are clearly labeled "Auto-delivered" in the dashboard, and both the SB 942 AI disclosure and a SynthID-equivalent invisible watermark are embedded automatically.

5. Watermarking & provenance

Every video we produce carries:

  • An in-frame "AI-generated" label visible for the first 3 seconds of the output (removable by contract on Enterprise tier, with documented customer responsibility for downstream disclosure).
  • C2PA-compatible provenance metadata in the MP4 container, listing the model stack used (see the sketch after this list).
  • A persistent watermark ID tied to the rendering job, so we can trace any output back to the originating account.
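
As an indication of how provenance metadata can be embedded, here is a minimal sketch assuming the open-source c2patool CLI is installed on the render machine; the manifest fields shown are illustrative, not our production schema:

```ts
import {execFileSync} from 'node:child_process';
import {writeFileSync} from 'node:fs';

// Illustrative C2PA manifest; the production version also records the
// full model stack and the per-job watermark ID.
const manifest = {
  claim_generator: 'PixelForgeHub/1.0',
  assertions: [
    {
      label: 'c2pa.actions',
      data: {
        actions: [
          {
            action: 'c2pa.created',
            digitalSourceType:
              'http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia',
          },
        ],
      },
    },
  ],
};
writeFileSync('manifest.json', JSON.stringify(manifest, null, 2));

// c2patool signs the asset and embeds the manifest in the MP4 container.
execFileSync('c2patool', ['render.mp4', '-m', 'manifest.json', '-o', 'render-signed.mp4']);
```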

6. Bias, safety, and limitations

AI models reflect the biases of their training data. PixelForgeHub outputs may:

  • Produce inaccurate or outdated factual claims.
  • Over-represent or under-represent particular demographics in visual scenes.
  • Fail on uncommon languages or regional accents beyond our tested 75-language set.

We recommend every customer review generated content before public distribution. We are not liable for downstream consequences of AI-generated content distributed without such review.

7. How to request a DPIA or vendor-security packet

Enterprise customers and EU data controllers can request a Data Protection Impact Assessment (DPIA) template, our latest subprocessor list, and vendor-security responses by emailing yogeshnichal@gmail.com. Response within 5 business days.

8. Changes to this disclosure

When we add, remove, or swap a model in our stack we update this page and note the change on /changelog. Material changes (for example, adding a model that trains on customer data) are emailed to subscribers 14 days in advance.

Contact: yogeshnichal@gmail.com · This page is published under the Terms of Service.