Smol Gardens began as a concept presented by the Femmecubator team at BetaNYC's UnSchool of Data 2026 conference on March 29th, emerging from their Open Civic Tech initiatives.
The project's name draws inspiration from Anil Dash's 2012 talk "The Web We Lost," which critiqued the internet's evolution into isolated, walled gardens. Today, as AI reshapes the technological landscape, Smol Gardens invites civic technologists to challenge LLM platform usage and explore alternative AI tools in civic tech tool building.
The project frames the core challenge: how to develop civic tech responsibly in the age of AI without creating redundant, unmaintainable, or harmful solutions. It highlights the tension between speed and consequence.
Smol Gardens establishes a groundwork for iterative study through three phases:
Create data tracking that measures human, systems, and environmental consequences. Embed accountability into every decision, not as an afterthought but as a foundation.
Invite civic technologists to document their actual AI usage through a "diary mission," revealing hidden costs and building collective responsibility.
Use small language models (SLMs) to build a curated repository of civic tech tools reviewed for ethical impact and community benefit. Every tool documents its tradeoffs.
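The "diary mission" above could be captured in a simple structured record. The sketch below is a hypothetical schema, not a finalized one; every field name is illustrative, chosen to mirror the project's three tracking areas (human, systems, environment).

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical "diary mission" entry; the schema is an assumption,
# shaped around the project's Human / Systems / Environment framing.
@dataclass
class DiaryEntry:
    day: date
    tool: str                  # which AI tool was used
    task: str                  # what the AI was asked to do
    prompts_sent: int
    human_impact: str          # consequences for the people served
    systems_impact: str        # maintenance burden, lock-in, redundancy
    environmental_impact: str  # energy or hardware notes
    tradeoffs: list = field(default_factory=list)

entry = DiaryEntry(
    day=date(2026, 4, 1),
    tool="local SLM",
    task="draft a FOIA request template",
    prompts_sent=4,
    human_impact="saved ~30 min; output still required fact-checking",
    systems_impact="no new dependency; runs offline",
    environmental_impact="laptop-only inference",
    tradeoffs=["speed vs. verification time"],
)
```

Because entries are plain data, a cohort's diaries could later be aggregated into the shared repository without any platform dependency.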
Small language models can run independently: locally, offline, and without corporate dependencies. This is about accountability. SLMs let civic tech teams own their tools, answering to communities, not corporations. Accountability flows directly to those being served, not to shareholders or distant platforms.
Is it possible to make meaningful civic tech while maintaining accountability to our communities? Smol Gardens says yes—but only if we ask difficult questions first, measure consequences, and design with intention.
This workshop is for civic technologists who are already using AI tools in their work but feel caught between the pressure to move fast and the need to build responsibly. Participants will document what's really happening when they use AI—revealing hidden costs and consequences that usually stay invisible—so they can make more intentional choices and hold their tools accountable to their communities.
As we track evidence, we're grounding this work in three core areas: AI's impact on humanity and society, future systems, and the environment.
We're prioritizing small language models (SLMs) such as SmolLM, a family of compact models released by Hugging Face that can run locally on laptops and mobile devices. This approach reinforces our commitment to "work small and local," creating tools that are accessible and community-driven rather than dependent on corporate infrastructure.
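As a rough illustration of what "runs locally" means in practice, here is a back-of-the-envelope memory estimate for an SLM's weights. The 20% overhead factor and the bytes-per-weight figures are assumptions for illustration, not benchmarks; SmolLM's largest variant is ~1.7B parameters.

```python
def estimated_ram_gb(n_params: float, bytes_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Rough RAM needed to hold model weights, with an assumed ~20%
    overhead for activations and runtime buffers."""
    return n_params * bytes_per_weight * overhead / 1e9

# SmolLM's largest variant is roughly 1.7B parameters.
full_precision = estimated_ram_gb(1.7e9, 2.0)   # fp16: 2 bytes/weight
quantized      = estimated_ram_gb(1.7e9, 0.5)   # 4-bit: 0.5 bytes/weight

print(f"fp16: ~{full_precision:.1f} GB, 4-bit: ~{quantized:.1f} GB")
```

By this estimate the model fits comfortably in a few gigabytes of laptop RAM, which is the practical meaning of "work small and local."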
We ask difficult questions: Who does this benefit? What's the real cost of inaction? What tradeoffs and consequences emerge for humanity, systems, and the environment? This discipline ensures we're building with intention, not just momentum.
We design reusable work and make it openly available so others don't repeat the same effort. This multiplies impact and builds collective knowledge.
With an open-source ethos, we invite builders to join us in collaborative #goodvibing—challenging our work, pushing it further, and making it better together.
This work addresses a fundamental question of accountability in tech, specifically around vibecoding, the practice of rapid prototyping with AI. Vibecoding offers accelerated development and efficiency gains, yet it presents a complex tradeoff. Can we harness its benefits while mitigating its harms?
Can vibecoding exist on more ethical platforms?
Large organizations see AI-powered development as a cost-cutting alternative, one that could replace a thousand interns. Yet the reality is more complicated. Data centers are proliferating to support a trillion-dollar LLM industry, creating staggering environmental costs. When we apply second-order thinking, deeper issues emerge: the process introduces redundancy, creates maintenance burdens, raises data sovereignty concerns, and opens vectors for malware.
Ethical AI policy and legislation are trailing far behind the speed of deployment. The technology is already being misused — malvertising via vibecoded sites, unauthorized use of writers' and designers' work, and accelerating narratives about the diminishing value of human creative labor.
Designers and community builders have historically been locked out of building tools without technical collaborators. Vibecoding changes that equation — but at what cost?
We brainstormed with 15 attendees at UnSchool of Data and captured themes from their feedback on this issue:
Defining the scorecard and data points around these core ideas: we aim to create a scoring system that measures Human, Environmental, and Systems impact. Each criterion is rated Not Met / Partially Met / Fully Met, with qualitative notes explaining the assessment.
Below is how a project might be assessed across key criteria:
Is speed worth these costs? Or can we build differently?
Fast iteration cycles can mean less testing, fewer edge cases caught, and tools deployed before their real-world harms are understood. This erodes public trust in institutions and technologies, especially in civic contexts where accuracy matters.
Using large language models from Big Tech creates lock-in. Communities become dependent on platforms they don't control, with terms of service that can change overnight. This undermines digital sovereignty and community autonomy.
Scale matters. Data centers powering LLM inference consume enormous energy. The speed benefit comes at an environmental cost that isn't always visible to builders, creating a hidden externality that future generations bear.
When we outsource thinking to AI tools, we risk atrophying critical judgment. Civic technologists making decisions for communities need to understand tradeoffs deeply—not just move fast and delegate reasoning to a model.
Tools built quickly without maintenance planning or documentation create technical debt. Communities inherit fragile systems that break without the original builders present, leaving civic infrastructure brittle.
Measuring the environmental cost of vibecoding is a core part of this experiment.
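A minimal sketch of what such a measurement could look like, comparing a hosted LLM against a local SLM over a 30-day challenge. The watt-hours-per-query figures here are assumed, illustrative values only; real numbers vary widely by model, hardware, and data-center efficiency, and the experiment itself would replace them with measured data.

```python
# Assumed per-query energy figures (Wh) -- illustrative placeholders,
# to be replaced by measurements gathered during the challenge.
WH_PER_QUERY = {
    "hosted_llm": 3.0,   # assumed Wh/query for a large hosted model
    "local_slm": 0.05,   # assumed Wh/query for an on-device SLM
}

def session_energy_wh(tool: str, queries: int) -> float:
    """Estimated energy for a vibecoding session, in watt-hours."""
    return WH_PER_QUERY[tool] * queries

# A 30-day challenge at an assumed 50 queries per day:
hosted = session_energy_wh("hosted_llm", 50 * 30)
local = session_energy_wh("local_slm", 50 * 30)
print(f"hosted: {hosted:.0f} Wh, local: {local:.0f} Wh")
```

Even with placeholder numbers, the structure makes the hidden externality visible: the comparison can be rerun as soon as real per-query measurements exist.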
The Smol Gardens 30-Day Challenge is an intensive, collaborative experiment bringing together civic tech builders, designers, and community members to build meaningful tools using ethical AI practices.
30 consecutive days of building, documenting, and evaluating civic technology projects.
Drawing from Mozilla Foundation's AI for Democracy tools, civic tech innovators can use vibecoding—rapid prototyping grounded in community needs—to build solutions across three critical areas:
Systems that diversify information flows and help communities identify and amplify reliable sources through collective verification, reducing dependence on extractive platform monopolies. Tools that reveal algorithmic influence and help people understand how their information is being shaped and filtered.
Government transparency systems that track decision-making and make it accessible. Public data infrastructure that makes government and corporate data accessible, analyzable, and actionable for advocacy. Accountability mechanisms that connect government actions to their impacts on communities. Participatory decision-making tools that integrate public input directly into governance decisions and resource allocation.
Privacy-preserving coordination tools that enable secure communication for activists and communities under threat. Surveillance resistance infrastructure that helps communities maintain digital autonomy and defend the digital public sphere from authoritarian misuse.
Build your proposal with your community, not from an AI prompt. Ground every feature in actual user needs and constraints.
| Phase | Days | Focus |
|---|---|---|
| Discovery | 1–5 | Problem identification, community interviews, POV statement writing |
| Ideation | 6–12 | Concept sketching, tool selection, architecture planning |
| Build | 13–25 | Prototype development, user testing, iteration cycles |
| Polish & Document | 26–30 | Finalization, environmental audit, report writing, presentation prep |
| Role | Responsibilities | Commitment |
|---|---|---|
| Lead Facilitator | Overall coordination, orientation facilitation, final report authorship | Full-time, 30 days |
| Participants (5–10) | Project selection, building, daily documentation, evaluation, design crit | ~2–3 hrs/day |
| Crit Reviewers (3–5) | Attend final design critique, provide written feedback | Days 28–30 only |
| Open-Source Community | Challenge first drafts, contribute to #goodvibing, fork and reuse outputs | Ongoing / voluntary |
This experiment does not measure success by whether vibecoding is vindicated. Success is defined as:
Projects will be assessed across multiple dimensions:
All projects become part of the Smol Gardens archive, available for the community to: