Google Antigravity Directory

The #1 directory for Google Antigravity prompts, rules, workflows & MCP servers. Optimized for Gemini 3 agentic development.

© 2026 Antigravity AI Directory. All rights reserved.


This website is not affiliated with, endorsed by, or associated with Google LLC. "Google" and "Gemini" are trademarks of Google LLC.

Testing Library Component Patterns

Modern component testing with Testing Library for Google Antigravity projects including queries, user events, and async testing.

Tags: testing-library, testing, react, components, accessibility
by Antigravity Team
⭐ 0 stars
.antigravity
# Testing Library Component Patterns for Google Antigravity

Master component testing with Testing Library in your Google Antigravity IDE projects. This comprehensive guide covers queries, user events, async testing, and accessibility patterns optimized for Gemini 3 agentic development.

## Testing Setup

Configure Testing Library with Vitest:

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';
import react from '@vitejs/plugin-react';
import tsconfigPaths from 'vite-tsconfig-paths';

export default defineConfig({
  plugins: [react(), tsconfigPaths()],
  test: {
    globals: true,
    environment: 'jsdom',
    setupFiles: ['./tests/setup.ts'],
    include: ['**/*.{test,spec}.{ts,tsx}'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html'],
      thresholds: { lines: 80, functions: 80, branches: 80 },
    },
  },
});
```

```typescript
// tests/setup.ts
import '@testing-library/jest-dom/vitest';
import { cleanup } from '@testing-library/react';
import { afterEach, vi } from 'vitest';

afterEach(() => {
  cleanup();
});

// Mock fetch
global.fetch = vi.fn();

// Mock IntersectionObserver
vi.stubGlobal('IntersectionObserver', class {
  observe = vi.fn();
  unobserve = vi.fn();
  disconnect = vi.fn();
});
```
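The config above assumes the Testing Library and Vitest packages are already installed. A typical set of dev dependencies for this setup might look like the following (a sketch, not pinned to specific versions; adjust to your project):

```shell
npm install -D vitest @vitest/coverage-v8 jsdom \
  @testing-library/react @testing-library/jest-dom @testing-library/user-event \
  @vitejs/plugin-react vite-tsconfig-paths
```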

## Query Patterns

Use the right queries for accessibility:

```typescript
// components/PromptCard.test.tsx
import { describe, it, expect } from 'vitest';
import { render, screen } from '@testing-library/react';
import { PromptCard } from './PromptCard';

const mockPrompt = {
  id: '1',
  slug: 'test-prompt',
  title: 'Test Prompt',
  description: 'A test prompt description',
  tags: ['react', 'typescript'],
  starCount: 42,
};

describe('PromptCard', () => {
  it('renders prompt information', () => {
    render(<PromptCard prompt={mockPrompt} />);

    // getByRole - best for accessibility
    expect(screen.getByRole('heading', { name: /test prompt/i })).toBeInTheDocument();
    
    // getByText - for static text
    expect(screen.getByText(/a test prompt description/i)).toBeInTheDocument();
    
    // getByRole with name
    expect(screen.getByRole('link', { name: /test prompt/i }))
      .toHaveAttribute('href', '/prompts/test-prompt');
  });

  it('displays tags', () => {
    render(<PromptCard prompt={mockPrompt} />);
    
    expect(screen.getByText('react')).toBeInTheDocument();
    expect(screen.getByText('typescript')).toBeInTheDocument();
  });

  it('shows star count', () => {
    render(<PromptCard prompt={mockPrompt} />);
    expect(screen.getByText('42')).toBeInTheDocument();
  });
});
```

## User Event Testing

Test user interactions:

```typescript
// components/SearchInput.test.tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { SearchInput } from './SearchInput';

describe('SearchInput', () => {
  it('calls onChange when typing', async () => {
    const user = userEvent.setup();
    const onChange = vi.fn();
    
    render(<SearchInput value="" onChange={onChange} />);
    
    const input = screen.getByRole('searchbox');
    await user.type(input, 'react');
    
    expect(onChange).toHaveBeenCalledTimes(5);
    expect(onChange).toHaveBeenLastCalledWith('react');
  });

  it('clears input on escape', async () => {
    const user = userEvent.setup();
    const onChange = vi.fn();
    
    render(<SearchInput value="react" onChange={onChange} />);
    
    const input = screen.getByRole('searchbox');
    await user.click(input);
    await user.keyboard('{Escape}');
    
    expect(onChange).toHaveBeenCalledWith('');
  });

  it('submits on enter', async () => {
    const user = userEvent.setup();
    const onSubmit = vi.fn();
    
    render(<SearchInput value="react" onChange={() => {}} onSubmit={onSubmit} />);
    
    const input = screen.getByRole('searchbox');
    await user.click(input);
    await user.keyboard('{Enter}');
    
    expect(onSubmit).toHaveBeenCalledWith('react');
  });
});
```

## Async Testing

Test async operations:

```typescript
// components/PromptList.test.tsx
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen, waitFor } from '@testing-library/react';
import { PromptList } from './PromptList';

const mockFetch = vi.fn();
global.fetch = mockFetch;

describe('PromptList', () => {
  beforeEach(() => {
    mockFetch.mockReset();
  });

  it('shows loading state initially', () => {
    mockFetch.mockResolvedValueOnce({
      ok: true,
      json: () => Promise.resolve({ prompts: [] }),
    });

    render(<PromptList />);
    
    expect(screen.getByRole('status')).toHaveTextContent(/loading/i);
  });

  it('renders prompts after loading', async () => {
    mockFetch.mockResolvedValueOnce({
      ok: true,
      json: () => Promise.resolve({
        prompts: [
          { id: '1', title: 'First Prompt', slug: 'first' },
          { id: '2', title: 'Second Prompt', slug: 'second' },
        ],
      }),
    });

    render(<PromptList />);

    await waitFor(() => {
      expect(screen.getByText('First Prompt')).toBeInTheDocument();
    });
    
    expect(screen.getByText('Second Prompt')).toBeInTheDocument();
  });

  it('shows error state on failure', async () => {
    mockFetch.mockRejectedValueOnce(new Error('Failed to fetch'));

    render(<PromptList />);

    await waitFor(() => {
      expect(screen.getByRole('alert')).toHaveTextContent(/error/i);
    });
  });

  it('shows empty state when no prompts', async () => {
    mockFetch.mockResolvedValueOnce({
      ok: true,
      json: () => Promise.resolve({ prompts: [] }),
    });

    render(<PromptList />);

    await waitFor(() => {
      expect(screen.getByText(/no prompts found/i)).toBeInTheDocument();
    });
  });
});
```

## Form Testing

Test form interactions:

```typescript
// components/PromptForm.test.tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen, waitFor } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { PromptForm } from './PromptForm';

describe('PromptForm', () => {
  it('submits form with valid data', async () => {
    const user = userEvent.setup();
    const onSubmit = vi.fn();
    
    render(<PromptForm onSubmit={onSubmit} />);

    await user.type(screen.getByLabelText(/title/i), 'My Prompt');
    await user.type(screen.getByLabelText(/description/i), 'A great description here');
    await user.type(screen.getByLabelText(/content/i), 'Prompt content goes here...');
    
    await user.click(screen.getByRole('button', { name: /submit/i }));

    await waitFor(() => {
      expect(onSubmit).toHaveBeenCalledWith({
        title: 'My Prompt',
        description: 'A great description here',
        content: 'Prompt content goes here...',
      });
    });
  });

  it('shows validation errors', async () => {
    const user = userEvent.setup();
    
    render(<PromptForm onSubmit={() => {}} />);

    await user.click(screen.getByRole('button', { name: /submit/i }));

    await waitFor(() => {
      expect(screen.getByText(/title is required/i)).toBeInTheDocument();
    });
  });
});
```

## Best Practices

1. **Use getByRole** as the primary query for accessibility
2. **Set up userEvent** for realistic interactions
3. **Use waitFor** for async assertions
4. **Mock external dependencies** appropriately
5. **Test behavior, not implementation** details
6. **Write meaningful assertions** with clear expectations
7. **Group related tests** with describe blocks

When to Use This Prompt

This Testing Library prompt is ideal for developers working on:

  • Applications that use Testing Library and need modern, reliable component tests
  • Projects that need production-ready Testing Library code with proper error handling
  • Teams looking to standardize their component-testing workflow
  • Developers wanting to learn industry-standard Testing Library patterns and techniques

By using this prompt, you can save hours of manual coding and ensure best practices are followed from the start. It's particularly valuable for teams that want consistent Testing Library usage across their codebase.

How to Use

  1. Copy the prompt - Click the copy button above to copy the entire prompt to your clipboard
  2. Paste into your AI assistant - Use with Claude, ChatGPT, Cursor, or any AI coding tool
  3. Customize as needed - Adjust the prompt based on your specific requirements
  4. Review the output - Always review generated code for security and correctness
💡 Pro Tip: For best results, provide context about your project structure and any specific constraints or preferences you have.

Best Practices

  • ✓ Always review generated code for security vulnerabilities before deploying
  • ✓ Test the generated Testing Library code in a development environment first
  • ✓ Customize the prompt output to match your project's coding standards
  • ✓ Keep your AI assistant's context window in mind for complex requirements
  • ✓ Version control your prompts alongside your code for reproducibility

Frequently Asked Questions

Can I use this Testing Library prompt commercially?

Yes! All prompts on Antigravity AI Directory are free to use for both personal and commercial projects. No attribution required, though it's always appreciated.

Which AI assistants work best with this prompt?

This prompt works excellently with Claude, ChatGPT, Cursor, GitHub Copilot, and other modern AI coding assistants. For best results, use models with large context windows.

How do I customize this prompt for my specific needs?

You can modify the prompt by adding specific requirements, constraints, or preferences. For Testing Library projects, consider mentioning your framework version, coding style, and any specific libraries you're using.
