LoRA Training Thought Leadership
A track record of LoRA training expertise spanning SDXL, Flux Dev, Wan 2.1, and Marey 1.5 — from platform-scale model libraries to 1080p video generation.
Overview
A body of technical documentation, guides, and hands-on methodology developed across multiple generations of AI model architecture — from image generation through 1080p video. The work spans platform-level LoRA libraries, open-source tooling adoption, cloud training pipelines, and production deployment for studio clients, demonstrating structural understanding of AI models across architectures, platforms, and phases of the technology.
Origins — SD 1.5 and the RTX 3090
It started with building a machine. We put together a local training rig around an RTX 3090 and began fine-tuning on Stable Diffusion 1.5, learning the craft from the ground up, hands on hardware. That early work put Promptcrafted on the map and gave us the visibility that led to everything that followed.
Scenario AI — SDXL Pipeline Development
That visibility led directly to Scenario. Working with Scenario's CEO and CTO, we developed the SDXL LoRA training pipeline for their platform, including curating and training a library of LoRA models that shipped as part of the product offering. The engagement also brought us into enterprise rooms, presenting to and working directly with major video game publishers and global advertising brands in their international offices, providing not just the models and pipelines but the technical education, creative demonstration, and strategic framing that closes the deal. That work established the core methodology of dataset preparation, hyperparameter optimization, and quality validation that would carry forward through every subsequent architecture.
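To make the methodology concrete, here is a minimal sketch of the kind of per-run record it implies. Every name, default, and the step arithmetic below is hypothetical and illustrative, not Scenario's actual configuration or API.

```python
from dataclasses import dataclass

# Hypothetical run record: the handful of knobs typically tuned per LoRA run.
# Defaults shown are common illustrative starting points, not recommendations.
@dataclass
class LoraRunConfig:
    rank: int = 16            # LoRA rank (network dimension)
    alpha: int = 16           # scaling factor, often set equal to rank
    learning_rate: float = 1e-4
    batch_size: int = 4
    repeats: int = 10         # dataset repeats per epoch
    epochs: int = 10

    def total_steps(self, num_images: int) -> int:
        """Optimizer steps for a run: images * repeats * epochs / batch size."""
        return (num_images * self.repeats * self.epochs) // self.batch_size

cfg = LoraRunConfig()
print(cfg.total_steps(num_images=20))  # 20 * 10 * 10 / 4 = 500
```

Keeping a structured record like this is what makes hyperparameter optimization and quality validation repeatable: each training run can be compared against its exact configuration rather than reconstructed from memory.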
Flux Dev — Early Adoption with AI Toolkit
We were among the first to work with Ostris' AI Toolkit to train Flux Dev and Wan 2.1 LoRAs. Our walkthrough video on training Flux Dev locally on an RTX 3090 has over 44,000 views: a practical demonstration led by Araminta K and Ostris, on the same RTX 3090 we started training SD 1.5 on. The video became a go-to resource for practitioners adopting the new architecture.
Wan 2.1 — Cloud Training Pipeline
We developed and deployed a Wan 2.1 cloud LoRA training script built on Kohya's Musubi Tuner, working closely with a professional developer on the implementation. This alternative training method expanded our capability beyond local GPU workflows into scalable cloud infrastructure, a critical step for production-volume model delivery.
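As a rough illustration of what a cloud launcher like this does, the sketch below assembles a training command from run parameters. The entry-point name and flags are placeholders standing in for whatever arguments the tuner actually expects; they are not Musubi Tuner's real CLI.

```python
import shlex

def build_train_command(dataset_dir: str, output_dir: str, rank: int = 32) -> str:
    """Assemble a shell-safe training command for a cloud worker.

    'train_lora.py' and the flag names are hypothetical placeholders;
    a real launcher would substitute the tuner's documented arguments.
    """
    args = [
        "python", "train_lora.py",   # placeholder entry point
        "--dataset", dataset_dir,
        "--output", output_dir,
        "--rank", str(rank),
    ]
    # shlex.join quotes each argument as needed for safe shell execution
    return shlex.join(args)

print(build_train_command("/data/wan21_set", "/outputs/run01"))
```

The value of a wrapper like this in a cloud pipeline is reproducibility: the exact command for every run is generated from parameters, logged, and can be replayed on any worker with the same environment.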
Marey 1.5 — 1080p Video Generation
All of this expertise converged on Marey 1.5, an ethically trained model generating 1080p video at a significantly larger scale than any architecture we had worked with previously. Training LoRAs at this level validated that the methodology we had built across SDXL, Flux Dev, and Wan 2.1 was genuinely architecture-agnostic. The structural understanding transfers. We were also essential members of the enterprise sales team here; the same range that worked at Scenario carried forward: understanding the architecture deeply enough to train it, and understanding the business deeply enough to sell it.
Documentation
The technical guides themselves cover dataset preparation and quality requirements, training hyperparameter selection and scheduling, common failure modes and debugging strategies, ComfyUI integration for inference testing, and best practices for model versioning and distribution. Written for product teams, studio partners, and external collaborators integrating generative AI into professional production environments.