# Gemini Command

The `gemini` command generates content using Google's Gemini AI models, providing powerful AI-assisted capabilities for code generation, explanations, and more.
## Syntax
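The usage synopsis appears to have been lost in extraction; based on the arguments and options documented below, the general form is likely:

```
cursor-utils gemini [OPTIONS] PROMPT
```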
## Arguments

| Argument | Description | Required | Example |
|---|---|---|---|
| `PROMPT` | The text prompt for Gemini to respond to | Yes | `"Explain how promises work in JavaScript"` |
## Options

| Option | Description | Default | Example |
|---|---|---|---|
| `--model` | The Gemini model to use | `gemini-1.5-pro` | `--model gemini-2.0-flash` |
| `--temperature` | Sampling temperature (0.0-1.0) | `0.7` | `--temperature 0.9` |
| `--max-tokens` | Maximum tokens to generate | Based on model | `--max-tokens 1000` |
| `--system` | System instruction for guiding model behavior | None | `--system "You are a Python expert"` |
| `--format` | Output format (`plain`, `markdown`, `json`, `rich`) | `rich` | `--format markdown` |
| `--help` | Show command help | - | `--help` |
## Available Models

| Model | Description | Best For |
|---|---|---|
| `gemini-1.5-pro` | Powerful general-purpose model | Default for most use cases |
| `gemini-2.0-pro-exp` | Experimental pro model | Complex reasoning, cutting-edge |
| `gemini-2.0-flash` | Faster, more efficient model | Quick responses, simpler tasks |
| `gemini-2.0-flash-exp` | Experimental flash model | Testing latest capabilities |
| `gemini-2.0-flash-thinking-exp` | Enhanced thinking capabilities | Step-by-step reasoning |
## Configuration

Before using the `gemini` command, you need to set up your Google Gemini API key:
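The setup snippet is missing here; a typical approach, assuming the tool reads the key from an environment variable (the name `GEMINI_API_KEY` is a hypothetical example, not confirmed by this page):

```shell
# Hypothetical variable name; check `cursor-utils gemini --help` or the
# project's configuration docs for the exact setup mechanism.
export GEMINI_API_KEY="your-api-key-here"
```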
You can obtain an API key from Google AI Studio.
## Examples

### Basic Usage

Generate a simple response:
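The original example block is missing; a representative invocation, reusing the prompt from the arguments table above:

```
cursor-utils gemini "Explain how promises work in JavaScript"
```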
### Code Generation

Generate a Python function:
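A representative invocation (the example block was lost in extraction; the prompt matches the one used later under Chaining Prompts):

```
cursor-utils gemini "Write a Python function to convert Celsius to Fahrenheit"
```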
### Using Different Models

Use a specific model:
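For example, selecting the faster flash model (the prompt itself is illustrative):

```
cursor-utils gemini --model gemini-2.0-flash "Summarize the difference between TCP and UDP"
```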
### Controlling Temperature

Lower temperature for more focused, deterministic responses:
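For example (illustrative prompt; 0.2 falls in the 0.1-0.4 range recommended under Best Practices below):

```
cursor-utils gemini --temperature 0.2 "List the HTTP status code categories"
```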
Higher temperature for more creative, varied responses:
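For example (illustrative prompt; 0.9 falls in the 0.7-0.9 range recommended under Best Practices below):

```
cursor-utils gemini --temperature 0.9 "Suggest five creative names for a command-line task runner"
```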
### Using System Instructions

Guide the model's behavior with system instructions:

```
cursor-utils gemini --system "You are a security expert specialized in code review" "Review this code for vulnerabilities: ..."
cursor-utils gemini --system "You are a technical writer who explains complex concepts clearly" "Explain OAuth 2.0 authorization flow"
```
### Different Output Formats

Output in markdown format:
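For example (reusing the prompt from the system-instructions examples above):

```
cursor-utils gemini --format markdown "Explain OAuth 2.0 authorization flow"
```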
Output in JSON format:
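For example (the prompt itself is illustrative):

```
cursor-utils gemini --format json "List the standard HTTP methods with a one-line description of each"
```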
### Limiting Output Length

Specify maximum token generation:
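For example, capping output at 1000 tokens as in the options table (illustrative prompt):

```
cursor-utils gemini --max-tokens 1000 "Summarize the SOLID principles of object-oriented design"
```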
## Use Cases

### Code Development

- Generate function implementations
- Write test cases
- Debug code

### Documentation

- Generate technical documentation
- Create usage examples

### Learning

- Explain concepts
- Compare technologies
## Advanced Techniques

### Chaining Prompts

Building on previous responses for iterative development:

```
# First, generate a basic implementation
cursor-utils gemini "Write a Python function to convert Celsius to Fahrenheit" > temperature.py

# Then, improve the implementation
cursor-utils gemini "Improve this function to include error handling and type checking: $(cat temperature.py)" > temperature_improved.py
```
### Structured Output

Request structured data for programmatic use:

```
cursor-utils gemini --format json "Convert this email to JSON with fields for sender, date, subject, and body: ..."
```
### Redirecting Output

Save output to files:

```
cursor-utils gemini --format markdown "Write documentation for GraphQL mutations" > graphql-mutations.md
```
## Best Practices

- **Be Specific**: More specific prompts yield better results.
- **Consider Model Selection**: Choose the appropriate model for your task:
    - Use `gemini-1.5-pro` for complex reasoning and detailed responses.
    - Use `gemini-2.0-flash` for quick, simple tasks.
- **Tune Temperature**: Adjust based on your need for creativity vs. determinism:
    - Lower temperature (0.1-0.4) for factual, consistent responses.
    - Higher temperature (0.7-0.9) for creative, varied responses.
- **Use System Instructions**: Guide the model's behavior using system instructions.
- **Format for Readability**: Use `markdown` for documentation, `rich` for interactive use.
## Troubleshooting

### API Key Issues

If you receive authentication errors, verify your API key is correctly set:
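The verification snippet is missing; assuming the key is stored in an environment variable (the name `GEMINI_API_KEY` is a hypothetical example, not confirmed by this page), you can inspect it with:

```
echo $GEMINI_API_KEY
```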
If it's missing or incorrect, set it:
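Again assuming the hypothetical `GEMINI_API_KEY` environment variable:

```shell
# Hypothetical variable name; check the project's configuration docs.
export GEMINI_API_KEY="your-api-key-here"
```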
### Model Availability

If you receive an error about model availability, try using a different model:
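For example, falling back to the default model (the prompt is a placeholder):

```
cursor-utils gemini --model gemini-1.5-pro "Your prompt here"
```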
### Rate Limiting

If you encounter rate limiting, wait a few minutes and try again, or check your API usage limits.
### Timeout Issues

For complex prompts that time out, try simplifying your prompt or breaking it into smaller parts.