About

What is Smol Gardens?

A Civic Builder's Guide to Accountable Tech

Origins

Smol Gardens began as a concept presented by the Femmecubator team at BetaNYC's UnSchool of Data conference on March 29, 2026, emerging from its Open Civic Tech initiatives.

The project's name draws inspiration from Anil Dash's 2012 talk "The Web We Lost," which critiqued the internet's evolution into isolated, walled gardens. Today, as AI reshapes the technological landscape, Smol Gardens invites civic technologists to question LLM platform usage and explore alternative AI tools for building civic tech.

Problem Statement

This section frames the core challenge: how to develop civic tech responsibly in the age of AI without creating redundant, unmaintainable, or harmful solutions. It highlights the tension between speed and consequence.

Our Proposal

Smol Gardens lays the groundwork for iterative study through three phases:

  1. Impact Framework

    Create data tracking that measures human, systems, and environmental consequences. Embed accountability into every decision, not as an afterthought but as a foundation.

  2. Design Workshops

    Invite civic technologists to document their actual AI usage through a "diary mission," revealing hidden costs and building collective responsibility.

  3. Accountable Tools Platform

    Use small language models (SLMs) to build a curated repository of civic tech tools reviewed for ethical impact and community benefit. Every tool documents its tradeoffs.

Why This Matters

Small language models operate independently—locally, offline, without corporate dependencies. This is about accountability. SLMs let civic tech teams own their tools, answering to communities, not corporations. Accountability flows directly to those being served, not shareholders or distant platforms.

The Core Question

Is it possible to make meaningful civic tech while maintaining accountability to our communities? Smol Gardens says yes—but only if we ask difficult questions first, measure consequences, and design with intention.

Who Is This For?

This workshop is for civic technologists who are already using AI tools in their work but feel caught between the pressure to move fast and the need to build responsibly. Participants will document what's really happening when they use AI—revealing hidden costs and consequences that usually stay invisible—so they can make more intentional choices and hold their tools accountable to their communities.

Framework

Guiding Principles

Our Foundation

As we track evidence, we're grounding this work in three core areas: AI's impact on humanity and society, future systems, and the environment.

Start with Tooling Decisions

We're prioritizing small language models (SLMs), such as SmolLM, a family of compact models released by the Hugging Face team that can run locally on laptops and mobile devices. This approach reinforces our commitment to "work small and local," creating tools that are accessible and community-driven rather than dependent on corporate infrastructure.

Apply Second-Order Thinking

We ask difficult questions: Who does this benefit? What's the real cost of inaction? What tradeoffs and consequences emerge for humanity, systems, and the environment? This discipline ensures we're building with intention, not just momentum.

Invest in Evergreen Systems

We design reusable work and make it openly available so others don't repeat the same effort. This multiplies impact and builds collective knowledge.

Critique the First Draft

With an open-source ethos, we invite builders to join us in collaborative #goodvibing—challenging our work, pushing it further, and making it better together.

Current Challenges

I. Current State of the System

This work addresses a fundamental question of accountability in tech, specifically in the process of vibecoding, or rapid prototyping with AI. Vibecoding offers accelerated development and efficiency gains, yet it presents a complex tradeoff. Can we harness its benefits while mitigating its harms?

Key Question

Can vibecoding exist on more ethical platforms?

The Contradiction

Large organizations see AI-powered development as a cost-cutting alternative, one that could replace a thousand interns. Yet the reality is more complicated. Data centers are proliferating to support a trillion-dollar LLM industry, creating staggering environmental costs. When we apply second-order thinking, deeper issues emerge: the process introduces redundancy, creates maintenance burdens, raises data sovereignty concerns, and opens vectors for malware.

Policy Lag

Ethical AI policy and legislation are trailing far behind the speed of deployment. The technology is already being misused — malvertising via vibecoded sites, unauthorized use of writers' and designers' work, and accelerating narratives about the diminishing value of human creative labor.

The Opportunity

Designers and community builders have historically been locked out of building tools without technical collaborators. Vibecoding changes that equation — but at what cost?

II. Community Feedback: UnSchool of Data Session

We brainstormed with 15 attendees at UnSchool of Data and captured the following themes from their feedback:

Discussion Points

  1. Compliance & Accessibility Gaps — 508 compliance, security best practices missing
  2. Maintenance & Scalability Issues — Code works initially, fails to scale
  3. Convergence Toward Average — All LLM solutions look the same; outputs trend toward the statistical average (vanilla content, slop)
  4. Skill & Community Erosion — Automation replaces human collaboration and community building
  5. Environmental & Health Consequences — Real harm to vulnerable communities (Memphis data center example)
  6. Redundant Solutions — Building another e-commerce platform when Shopify exists
  7. Ownership & Accountability Gaps — No clear governance or long-term responsibility
  8. Black Box Training Data — Unknown sources, no attribution, no consent
  9. Bias & Marginalized Voices — Solutions amplify privileged perspectives, silence marginalized groups

The 7 Themes

  • Operational Sustainability — Production-readiness, compliance, transparency
  • Innovation & Diversity — Moving away from homogenized solutions
  • Human & Community Impact — Augmenting vs. replacing human work
  • Environmental & Physical Harm — Real consequences in real communities
  • Transparency & Accountability — Attribution, consent, open knowledge commons
  • Resource Efficiency — Prevent redundancy, reuse existing solutions
  • Governance & Ownership — Communities own and control their own tools

Data Index Checklist

Overview

We are defining a scorecard and data points around these core ideas: a system for scoring work that measures Human, Environmental, and Systems impact. Each criterion is rated Not Met / Partially Met / Fully Met, with qualitative notes explaining the assessment.

Example Evaluation Checklist

Below is how a project might be assessed across key criteria:

  • Meets Core Purpose — Is the solution mission-aligned?
  • Uplift of Labor & Skills — Do participants gain new skills?
  • Reusable Work for Collective — Is work documented and shareable?
  • Cognitive Sovereignty — Did builders make deliberate tool choices?
  • Environmental Impact — Measured and documented?
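
As a hypothetical sketch (the names and structure below are illustrative assumptions, not a finalized schema), the scorecard could be represented as a small data structure that pairs each criterion with a rating and the qualitative notes the overview calls for:

```python
from dataclasses import dataclass, field
from enum import Enum

class Rating(Enum):
    NOT_MET = "Not Met"
    PARTIALLY_MET = "Partially Met"
    FULLY_MET = "Fully Met"

@dataclass
class Criterion:
    name: str
    rating: Rating
    notes: str  # qualitative notes explaining the assessment

@dataclass
class Scorecard:
    project: str
    criteria: list[Criterion] = field(default_factory=list)

    def summary(self) -> dict[str, int]:
        """Count how many criteria fall into each rating band."""
        counts = {r.value: 0 for r in Rating}
        for c in self.criteria:
            counts[c.rating.value] += 1
        return counts

# Hypothetical assessment of a sample project
card = Scorecard("Sample Civic Tool", [
    Criterion("Meets Core Purpose", Rating.FULLY_MET,
              "Directly addresses the civic problem it was designed for."),
    Criterion("Uplift of Labor & Skills", Rating.PARTIALLY_MET,
              "New skills gained, but maintenance still needs a specialist."),
    Criterion("Reusable Work for Collective", Rating.FULLY_MET,
              "Documented and published openly."),
    Criterion("Cognitive Sovereignty", Rating.PARTIALLY_MET,
              "Some tool choices defaulted to AI suggestions."),
    Criterion("Environmental Impact", Rating.NOT_MET,
              "No measurement yet."),
])
print(card.summary())  # → {'Not Met': 1, 'Partially Met': 2, 'Fully Met': 2}
```

Keeping the notes field mandatory mirrors the framework's intent: a rating without a qualitative explanation would not satisfy the checklist.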

Meets Core Purpose

  • Is the solution mission-aligned, or has it drifted toward efficiency gains that don't serve the intended community?
  • Does the tool directly address the civic problem it was designed to solve?
  • Can participants articulate why this work matters to their specific context?

Uplift of Labor and Skills

  • Does building this tool expand participants' capabilities or deepen their expertise?
  • Are new skills gained (technical, collaborative, domain knowledge)?
  • Does the work create pathways for others to learn, or does it require a specialist to maintain?
  • Are participants compensated fairly for their intellectual contribution?
  • Does the process encourage mentorship or knowledge-sharing rather than isolated work?

Reusable Work for the Collective

  • Is the code, design, or methodology documented and shareable?
  • Can other builders adopt or adapt this work without starting from scratch?
  • Are the learnings published in a format that reduces future effort?
  • Does the work strengthen the commons rather than create isolated tools?

Cognitive Sovereignty

  • Did participants make deliberate choices about which tools and approaches to use, or did they default to what AI suggested?
  • Do builders understand the reasoning behind their technical decisions?
  • Is there space for human judgment to override AI recommendations?
  • Can participants explain their work to others without relying on AI explanations?

The Core Question

Is speed worth these costs? Or can we build differently?

Quality and Trust Erosion

Fast iteration cycles can mean less testing, fewer edge cases caught, and tools deployed before their real-world harms are understood. This erodes public trust in institutions and technologies, especially in civic contexts where accuracy matters.

Dependency on Corporate Infrastructure

Using large language models from Big Tech creates lock-in. Communities become dependent on platforms they don't control, with terms of service that can change overnight. This undermines digital sovereignty and community autonomy.

Environmental and Resource Costs

Scale matters. Data centers powering LLM inference consume enormous energy. The speed benefit comes at an environmental cost that isn't always visible to builders, creating a hidden externality that future generations bear.

Cognitive and Decision-Making Risks

When we outsource thinking to AI tools, we risk atrophying critical judgment. Civic technologists making decisions for communities need to understand tradeoffs deeply—not just move fast and delegate reasoning to a model.

Systemic Fragility

Tools built quickly without maintenance planning or documentation create technical debt. Communities inherit fragile systems that break without the original builders present, leaving civic infrastructure brittle.

Scoping to Vibecoding and Rapid Prototyping Activities

Measuring the environmental cost of vibecoding is a core part of this experiment.

Measurement Framework

  • Track model size and inference calls per project
  • Estimate compute cost using available emissions calculators
  • Compare against equivalent human-collaborator workflow
  • Publish findings as part of the Evergreen output so others can replicate the measurement
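
The framework above could be sketched as a back-of-envelope estimator. Every constant below is an illustrative assumption, not a measured value; a real study would substitute figures from the emissions calculators mentioned above for the actual model and grid:

```python
def estimate_emissions_g(
    inference_calls: int,
    avg_tokens_per_call: int,
    joules_per_token: float = 0.5,      # assumed energy per generated token, not measured
    grid_g_co2_per_kwh: float = 400.0,  # assumed grid carbon intensity, varies by region
) -> float:
    """Rough grams-of-CO2 estimate for a project's tracked inference usage."""
    total_joules = inference_calls * avg_tokens_per_call * joules_per_token
    kwh = total_joules / 3_600_000  # 1 kWh = 3.6 million joules
    return kwh * grid_g_co2_per_kwh

# Hypothetical 30-day project: 1,200 logged calls averaging 500 tokens each
print(round(estimate_emissions_g(1200, 500), 2))  # → 33.33
```

Publishing the chosen constants alongside the result is what makes the measurement replicable, per the Evergreen output goal.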
Design Challenge

30-Day Challenge

Overview

The Smol Gardens 30-Day Challenge is an intensive, collaborative experiment bringing together civic tech builders, designers, and community members to build meaningful tools using ethical AI practices.

Timeline

30 consecutive days of building, documenting, and evaluating civic technology projects.

Who Can Participate

  • Civic Tech Builders: Developers, designers, and technologists working in the civic space
  • Domain Experts: Education, democracy, and social change specialists
  • Community Members: People directly affected by civic technology decisions
  • Crit Reviewers: Senior practitioners providing feedback and guidance

What You'll Build

Drawing from Mozilla Foundation's AI for Democracy tools, civic tech innovators can use vibecoding—rapid prototyping grounded in community needs—to build solutions across three critical areas:

1. Enable Better Information

Systems that diversify information flows and help communities identify and amplify reliable sources through collective verification, reducing dependence on extractive platform monopolies. Tools that reveal algorithmic influence and help people understand how their information is being shaped and filtered.

Success looks like: Communities have access to trustworthy information they can verify themselves. People can identify manipulation and propaganda. Diverse voices are heard. Users understand how algorithms shape what they see and have agency to change it.

2. Build Institutional Transparency and Accountability

Government transparency systems that track decision-making and make it accessible. Public data infrastructure that makes government and corporate data accessible, analyzable, and actionable for advocacy. Accountability mechanisms that connect government actions to their impacts on communities. Participatory decision-making tools that integrate public input directly into governance decisions and resource allocation.

Success looks like: Citizens can track how power is used and hold decision-makers accountable. Government data is accessible and machine-readable for advocacy. Communities have real influence over decisions affecting them.

3. Protect and Expand Civic Space

Privacy-preserving coordination tools that enable secure communication for activists and communities under threat. Surveillance resistance infrastructure that helps communities maintain digital autonomy and defend the digital public sphere from authoritarian misuse.

Success looks like: Organizers and activists can coordinate safely despite surveillance or repression. Marginalized communities can participate in civic life without exposing themselves to harm. Civic spaces remain open for organizing, even under hostile conditions.

Note

Build your proposal with your community, not from an AI prompt. Ground every feature in actual user needs and constraints.

Key Commitments

  • Use small language models (e.g., Mistral, BigScience's BLOOM) instead of large commercial models
  • Document your process daily through video diaries or written logs
  • Participate in weekly group check-ins and feedback sessions
  • Publish your work as open-source at the end of the challenge
  • Complete environmental impact assessment for your project

Project Plan

30-Day Structure

  • Discovery (Days 1–5) — Problem identification, community interviews, POV statement writing
  • Ideation (Days 6–12) — Concept sketching, tool selection, architecture planning
  • Build (Days 13–25) — Prototype development, user testing, iteration cycles
  • Polish & Document (Days 26–30) — Finalization, environmental audit, report writing, presentation prep

Daily Cadence

  • Individual Work: 2–3 hours per day on your project
  • Documentation: 30 minutes of journaling or video diary entries
  • Weekly Check-in: 1 hour group meeting for peer feedback and updates
  • Crit Sessions: Days 28–30 final presentations and structured feedback

Team Roles & Commitments

  • Lead Facilitator — Overall coordination, orientation facilitation, final report authorship (full-time, 30 days)
  • Participants (5–10) — Project selection, building, daily documentation, evaluation, design crit (~2–3 hrs/day)
  • Crit Reviewers (3–5) — Attend final design critique, provide written feedback (Days 28–30 only)
  • Open-Source Community — Challenge first drafts, contribute to #goodvibing, fork and reuse outputs (ongoing / voluntary)

Success Metrics

Definition of Success

This experiment does not measure success by whether vibecoding is vindicated. Success is defined as:

  • Honest, documented findings that the community can build on
  • At least one project reaching #implemented status
  • A reusable rubric and scorecard available to other civic tech builders
  • A published report that contributes to the ethical AI conversation from a designer-first, community-first perspective
  • Clarity on whether this path is worth continuing — or a well-reasoned case for why it is not

Evaluation Criteria

Projects will be assessed across multiple dimensions:

  • Community Impact: Does the tool address a real need? Will it reach and serve people?
  • Design Quality: Is the solution coherent, usable, and well-crafted?
  • Ethical Grounding: Does the project reflect the guiding principles? Is it environmentally conscious?
  • Documentation: Is the process documented in a way others can learn from?
  • Reusability: Can other builders fork, adapt, and improve this work?

Deliverables

  • Working prototype or MVP (minimum viable product)
  • Daily documentation logs (video diary or written journal)
  • Final presentation and demo video

Community Contribution

All projects become part of the Smol Gardens archive, available for the community to:

  • Learn from methodology and approach
  • Fork and adapt for their own contexts
  • Contribute improvements and extensions
  • Reference in their own civic tech work