DuetG AI Connector
by DuetG
Connect WordPress AI Client to any OpenAI-compatible AI API provider.
Version 0.3.1 · Compatible with WP 7.0 · Last updated 15 Apr, 2026
Plugin Details
- Version: 0.3.1
- Last Updated: Apr 15, 2026
- Requires WP: 7.0+
- Tested Up To: 7.0
- PHP Version: 7.4 or higher
- Author: DuetG

Support & Rating
- Rating: 0 out of 5 (0 reviews)
- Support Threads: 0
- Resolved: 0%
Frequently Asked Questions
Common questions about DuetG AI Connector
How do I enable debug logging?
Add this line to your wp-config.php:
define('DUETGAICON_DEBUG', true);
When enabled, debug information will be written to your server's debug log (usually wp-content/debug.log). This includes:
* Request/response details for AI API calls
* Provider registration status
* Model handler information
Note: Disable debug logging in production environments to avoid performance impact and log file growth.
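The plugin writes to the standard WordPress debug log, which only exists if WordPress's own logging is turned on. A minimal wp-config.php sketch (the WP_DEBUG constants are core WordPress; DUETGAICON_DEBUG is the plugin's flag):

```php
// wp-config.php (place above the "That's all, stop editing!" line)
define('WP_DEBUG', true);          // enable WordPress debug mode
define('WP_DEBUG_LOG', true);      // write errors to wp-content/debug.log
define('WP_DEBUG_DISPLAY', false); // keep errors out of page output
define('DUETGAICON_DEBUG', true);  // enable this plugin's debug logging
```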
Does this plugin work without WordPress 7.0?
No, this plugin requires WordPress 7.0 or higher because it uses the built-in Connectors API for API key management.
Why does the number of suggestions sometimes not match the number of notes?
When using Review Notes, you may notice that the number of suggestions returned by the AI does not exactly match the number of notes displayed in the editor.
This is expected behavior and has two causes:
Multi-category suggestions: Some AI models return a single suggestion that applies to multiple review categories (e.g., review_type: "seo, accessibility"). The plugin preserves these as-is, so one suggestion may appear under multiple note categories in WordPress AI Client.
Model response format: The AI model controls the number of suggestions it returns, and WordPress AI Client determines how to display and categorize them. The plugin correctly forwards the model's response without modifying the count.
Which base URLs can I use for common providers?
Ollama (local): http://localhost:11434/v1
LM Studio (local): http://localhost:1234/v1
MiniMax: https://api.minimax.io/v1
Moonshot: https://api.moonshot.ai/v1
DeepSeek: https://api.deepseek.com/v1
SiliconFlow: https://api.siliconflow.cn/v1
Other providers: Check their documentation
Do I need an API key?
Some providers require an API key. For local installations (like Ollama) that don't require authentication, you can enter any dummy string (e.g., "not-required") as the API key.
Why do local reasoning/thinking models sometimes time out?
Local reasoning models (like Gemma 4, QwQ, etc.) running on Ollama generate long "thinking" chains before producing their final answer. This process can take 30-60 seconds or more, which can trigger cURL's low speed limit timeout (30 seconds by default).
Cloud models generally work well: most cloud API providers (DeepSeek, MiniMax, Moonshot, etc.) respond quickly without timeout issues. If a cloud model frequently times out, it may have an unusually long thinking chain; try switching to a different model.
Recommended solutions for local models:
Use non-reasoning models for local AI features. For Ollama, models like qwen2.5:7b, llama3.2:3b, or phi3 work well without the timeout issue.
Configure Ollama to keep models loaded:
export OLLAMA_KEEP_ALIVE=-1 # Keep model in memory
If using reasoning models, be aware that WordPress AI features may be slower or may time out. The thinking behavior is controlled by the model, not by the plugin.
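If adjusting the model or Ollama's keep-alive isn't enough, the overall request timeout on the WordPress side can be raised through the core `http_request_timeout` filter. Whether this also covers the cURL low-speed limit depends on the HTTP transport in use, so treat this as a sketch rather than a guaranteed fix; the 120-second value is an arbitrary example:

```php
// e.g. in a small mu-plugin or your theme's functions.php.
// 'http_request_timeout' is a core WordPress filter applied to every
// wp_remote_* request; returning a larger value gives slow local
// reasoning models more time to respond.
add_filter('http_request_timeout', function ($timeout) {
    return 120; // seconds; pick a value that suits your model
});
```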
How do I use a local AI provider (like Ollama or LM Studio)?
By default, WordPress blocks requests to localhost and private IP addresses for security (SSRF protection). If you're using a local AI provider, you can disable this protection by adding to your wp-config.php:
define('DUETGAICON_ALLOW_LOCAL_URLS', true);
Warning: Disabling SSRF protection allows requests to private/local IPs. Only enable this if you trust your local AI provider and your server is not directly accessible from the internet.
This setting applies to both text and image models when using local AI providers.
Tip: When DUETGAICON_ALLOW_LOCAL_URLS is enabled, a Network Connectivity Test tool appears on the Test AI page (Tools > Test AI). You can use it to verify that your WordPress server can reach your local AI provider before running actual AI feature tests. This is especially useful for debugging connection issues with local Ollama or LM Studio installations.
How do I use this in my code?

use WordPress\AiClient\AiClient;

$registry = AiClient::defaultRegistry();

// Text Generation
$model = $registry->getProviderModel('custom_text', 'gpt-4');
$result = $model->generateTextResult([
    new \WordPress\AiClient\Messages\DTO\UserMessage([
        new \WordPress\AiClient\Messages\DTO\MessagePart('Your prompt here'),
    ]),
]);
echo $result->toText();

// Image Generation
$model = $registry->getProviderModel('custom_image', 'dall-e-3');
$result = $model->generateImageResult([
    new \WordPress\AiClient\Messages\DTO\UserMessage([
        new \WordPress\AiClient\Messages\DTO\MessagePart('Your prompt here'),
    ]),
]);
$files = $result->toImageFiles();
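As a usage sketch, the text-generation calls above can be wrapped in a small helper with error handling. The function name `duetg_generate_text` is hypothetical, and only classes and methods shown in the snippet above are used:

```php
use WordPress\AiClient\AiClient;
use WordPress\AiClient\Messages\DTO\UserMessage;
use WordPress\AiClient\Messages\DTO\MessagePart;

// Hypothetical convenience wrapper around the text-generation flow above.
function duetg_generate_text(string $prompt, string $model_id = 'gpt-4'): string {
    try {
        $model = AiClient::defaultRegistry()
            ->getProviderModel('custom_text', $model_id);
        $result = $model->generateTextResult([
            new UserMessage([new MessagePart($prompt)]),
        ]);
        return $result->toText();
    } catch (\Throwable $e) {
        // Log and degrade gracefully instead of surfacing a fatal error.
        error_log('DuetG AI request failed: ' . $e->getMessage());
        return '';
    }
}
```

Catching `\Throwable` keeps a misconfigured provider or unreachable endpoint from taking down the whole page; callers can treat an empty string as "no AI output available".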