kafka-architecture

by openclaw

Apache Kafka architecture expert for cluster design, capacity planning, and high availability. Use when designing Kafka clusters, choosing partition strategies, or sizing brokers for production workloads.

Scanned

Risk: Low
Status: warning
Findings: 1
Last Scanned: 2/11/2026


Scan Report

Duration: 269.3s
Rules checked: 147
Scanned at: 2/11/2026, 7:24:24 PM

Scanners: 4/5 ran

clawguard-rules · 0 findings · 5ms
No findings — all checks passed.

Logs:
[2026-02-11T19:19:55.226Z] Running @yourclaw/clawguard-rules pattern matcher
Scanning: /tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture/SKILL.md
Content length: 15143 chars
Patterns matched: 0
✓ Completed in 5ms
gitleaks · 0 findings · 157821ms
No findings — all checks passed.

Logs:
[2026-02-11T19:22:33.047Z] $ gitleaks detect --source /tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture --report-format json --report-path /dev/stdout --no-git

⚠ stderr output:

 │╲
 │ ○
 ○ ░
 ░ gitleaks

7:22PM FTL Report path is not writable: /dev/stdout error="open /dev/stdout: no such device or address"

Process exited with code 1
✓ Completed in 157821ms
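The FTL line above is why gitleaks exited with code 1: it could not open /dev/stdout as a report path inside the scan sandbox. A minimal workaround sketch (a hypothetical wrapper, not part of the scanner) is to have gitleaks write its JSON report to a regular temp file and then print it, using only the flags already shown in the log:

```shell
#!/usr/bin/env sh
# Hypothetical wrapper: write the gitleaks JSON report to a regular file
# instead of /dev/stdout, then emit it for the calling scanner to capture.
SRC=/tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture
REPORT="$(mktemp)"

# gitleaks exits non-zero on leaks or errors; capture the report either way.
gitleaks detect --source "$SRC" --report-format json --report-path "$REPORT" --no-git
cat "$REPORT"
rm -f "$REPORT"
```

With this shape the report lands in a writable file, so the "Report path is not writable" failure cannot occur regardless of what stdout is attached to.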
semgrep · 0 findings · 269307ms
No findings — all checks passed.

Logs:
[2026-02-11T19:24:24.536Z] $ semgrep scan --json --quiet --config auto /tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture
{"version":"1.151.0","results":[],"errors":[],"paths":{"scanned":["/tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture/SKILL.md","/tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture/_meta.json"]},"time":{"rules":[],"rules_parse_time":29.656734943389893,"profiling_times":{"config_time":39.04051756858826,"core_time":38.87340188026428,"ignores_time":0.02653980255126953,"total_time":78.07825183868408},"parsing_time":{"total_time":0.0,"per_file_time":{"mean":0.0,"std_dev":0.0},"very_slow_stats":{"time_ratio":0.0,"count_ratio":0.0},"very_slow_files":[]},"scanning_time":{"total_time":0.3606758117675781,"per_file_time":{"mean":0.07213516235351562,"std_dev":0.003900456684582423},"very_slow_stats":{"time_ratio":0.0,"count_ratio":0.0},"very_slow_files":[]},"matching_time":{"total_time":0.0,"per_file_and_rule_time":{"mean":0.0,"std_dev":0.0},"very_slow_stats":{"time_ratio":0.0,"count_ratio":0.0},"very_slow_rules_on_files":[]},"tainting_time":{"total_time":0.0,"per_def_and_rule_time":{"mean":0.0,"std_dev":0.0},"very_slow_stats":{"time_ratio":0.0,"count_ratio":0.0},"very_slow_rules_on_defs":[]},"fixpoint_timeouts":[],"prefiltering":{"project_level_time":0.0,"file_level_time":0.0,"rules_with_project_prefilters_ratio":0.0,"rules_with_file_prefilters_ratio":0.97,"rules_selected_ratio":0.04,"rules_matched_ratio":0.04},"targets":[],"total_bytes":0,"max_memory_bytes":1148638464},"engine_requested":"OSS","skipped_rules":[],"profiling_results":[]}

Process exited with code 0
✓ Completed in 269307ms
mcp-scan · 1 finding · 203111ms
MCP-W004: The MCP server is not in our registry.

Logs:
1[2026-02-11T19:23:18.343Z] $ mcp-scan --skills /tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture --json
2{
3 "/tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov": {
4 "client": "not-available",
5 "path": "/tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov",
6 "servers": [
7 {
8 "name": "sw-kafka-architecture",
9 "server": {
10 "path": "/tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture",
11 "type": "skill"
12 },
13 "signature": {
14 "metadata": {
15 "meta": null,
16 "protocolVersion": "built-in",
17 "capabilities": {
18 "experimental": null,
19 "logging": null,
20 "prompts": null,
21 "resources": null,
22 "tools": {
23 "listChanged": false
24 },
25 "completions": null,
26 "tasks": null
27 },
28 "serverInfo": {
29 "name": "kafka-architecture",
30 "title": null,
31 "version": "skills",
32 "websiteUrl": null,
33 "icons": null
34 },
35 "instructions": "Apache Kafka architecture expert for cluster design, capacity planning, and high availability. Use when designing Kafka clusters, choosing partition strategies, or sizing brokers for production workloads.",
36 "prompts": {
37 "listChanged": false
38 },
39 "resources": {
40 "subscribe": null,
41 "listChanged": false
42 }
43 },
44 "prompts": [
45 {
46 "name": "SKILL.md",
47 "title": null,
48 "description": "\n\n# Kafka Architecture & Design Expert\n\nComprehensive knowledge of Apache Kafka architecture patterns, cluster design principles, and production best practices for building resilient, scalable event streaming platforms.\n\n## Core Architecture Concepts\n\n### Kafka Cluster Components\n\n**Brokers**:\n- Individual Kafka servers that store and serve data\n- Each broker handles thousands of partitions\n- Typical: 3-10 brokers per cluster (small), 10-100+ (large enterprises)\n\n**Controller**:\n- One broker elected as controller (via KRaft or ZooKeeper)\n- Manages partition leaders and replica assignments\n- Failure triggers automatic re-election\n\n**Topics**:\n- Logical channels for message streams\n- Divided into partitions for parallelism\n- Can have different retention policies per topic\n\n**Partitions**:\n- Ordered, immutable sequence of records\n- Unit of parallelism (1 partition = 1 consumer in a group)\n- Distributed across brokers for load balancing\n\n**Replicas**:\n- Copies of partitions across multiple brokers\n- 1 leader replica (serves reads/writes)\n- N-1 follower replicas (replication only)\n- In-Sync Replicas (ISR): Followers caught up with leader\n\n### KRaft vs ZooKeeper Mode\n\n**KRaft Mode** (Recommended, Kafka 3.3+):\n```yaml\nCluster Metadata:\n - Stored in Kafka itself (no external ZooKeeper)\n - Metadata topic: __cluster_metadata\n - Controller quorum (3 or 5 nodes)\n - Faster failover (<1s vs 10-30s)\n - Simplified operations\n```\n\n**ZooKeeper Mode** (Legacy, deprecated in 4.0):\n```yaml\nExternal Coordination:\n - Requires separate ZooKeeper ensemble (3-5 nodes)\n - Stores cluster metadata, configs, ACLs\n - Slower failover (10-30 seconds)\n - More complex to operate\n```\n\n**Migration**: ZooKeeper \u2192 KRaft migration supported in Kafka 3.6+\n\n## Cluster Sizing Guidelines\n\n### Small Cluster (Development/Testing)\n\n```yaml\nConfiguration:\n Brokers: 3\n Partitions per broker: ~100-500\n Total partitions: 
300-1500\n Replication factor: 3\n Hardware:\n - CPU: 4-8 cores\n - RAM: 8-16 GB\n - Disk: 500 GB - 1 TB SSD\n - Network: 1 Gbps\n\nUse Cases:\n - Development environments\n - Low-volume production (<10 MB/s)\n - Proof of concepts\n - Single datacenter\n\nExample Workload:\n - 50 topics\n - 5-10 partitions per topic\n - 1 million messages/day\n - 7-day retention\n```\n\n### Medium Cluster (Standard Production)\n\n```yaml\nConfiguration:\n Brokers: 6-12\n Partitions per broker: 500-2000\n Total partitions: 3K-24K\n Replication factor: 3\n Hardware:\n - CPU: 16-32 cores\n - RAM: 64-128 GB\n - Disk: 2-8 TB NVMe SSD\n - Network: 10 Gbps\n\nUse Cases:\n - Standard production workloads\n - Multi-team environments\n - Regional deployments\n - Up to 500 MB/s throughput\n\nExample Workload:\n - 200-500 topics\n - 10-50 partitions per topic\n - 100 million messages/day\n - 30-day retention\n```\n\n### Large Cluster (High-Scale Production)\n\n```yaml\nConfiguration:\n Brokers: 20-100+\n Partitions per broker: 2000-4000\n Total partitions: 40K-400K+\n Replication factor: 3\n Hardware:\n - CPU: 32-64 cores\n - RAM: 128-256 GB\n - Disk: 8-20 TB NVMe SSD\n - Network: 25-100 Gbps\n\nUse Cases:\n - Large enterprises\n - Multi-region deployments\n - Event-driven architectures\n - 1+ GB/s throughput\n\nExample Workload:\n - 1000+ topics\n - 50-200 partitions per topic\n - 1+ billion messages/day\n - 90-365 day retention\n```\n\n### Kafka Streams / Exactly-Once Semantics (EOS) Clusters\n\n```yaml\nConfiguration:\n Brokers: 6-12+ (same as standard, but more control plane load)\n Partitions per broker: 500-1500 (fewer due to transaction overhead)\n Total partitions: 3K-18K\n Replication factor: 3\n Hardware:\n - CPU: 16-32 cores (more CPU for transactions)\n - RAM: 64-128 GB\n - Disk: 4-12 TB NVMe SSD (more for transaction logs)\n - Network: 10-25 Gbps\n\nSpecial Considerations:\n - More brokers due to transaction coordinator load\n - Lower partition count per broker (transactions = 
more overhead)\n - Higher disk IOPS for transaction logs\n - min.insync.replicas=2 mandatory for EOS\n - acks=all required for producers\n\nUse Cases:\n - Stream processing with exactly-once guarantees\n - Financial transactions\n - Event sourcing with strict ordering\n - Multi-step workflows requiring atomicity\n```\n\n## Partitioning Strategy\n\n### How Many Partitions?\n\n**Formula**:\n```\nPartitions = max(\n Target Throughput / Single Partition Throughput,\n Number of Consumers (for parallelism),\n Future Growth Factor (2-3x)\n)\n\nSingle Partition Limits:\n - Write throughput: ~10-50 MB/s\n - Read throughput: ~30-100 MB/s\n - Message rate: ~10K-100K msg/s\n```\n\n**Examples**:\n\n**High Throughput Topic** (Logs, Events):\n```yaml\nRequirements:\n - Write: 200 MB/s\n - Read: 500 MB/s (multiple consumers)\n - Expected growth: 3x in 1 year\n\nCalculation:\n Write partitions: 200 MB/s \u00f7 20 MB/s = 10\n Read partitions: 500 MB/s \u00f7 40 MB/s = 13\n Growth factor: 13 \u00d7 3 = 39\n\nRecommendation: 40-50 partitions\n```\n\n**Low-Latency Topic** (Commands, Requests):\n```yaml\nRequirements:\n - Write: 5 MB/s\n - Read: 10 MB/s\n - Latency: <10ms p99\n - Order preservation: By user ID\n\nCalculation:\n Throughput partitions: 5 MB/s \u00f7 20 MB/s = 1\n Parallelism: 4 (for redundancy)\n\nRecommendation: 4-6 partitions (keyed by user ID)\n```\n\n**Dead Letter Queue**:\n```yaml\nRecommendation: 1-3 partitions\nReason: Low volume, order less important\n```\n\n### Partition Key Selection\n\n**Good Keys** (High Cardinality, Even Distribution):\n```yaml\n\u2705 User ID (UUIDs):\n - Millions of unique values\n - Even distribution\n - Example: \"user-123e4567-e89b-12d3-a456-426614174000\"\n\n\u2705 Device ID (IoT):\n - Unique per device\n - Natural sharding\n - Example: \"device-sensor-001-zone-a\"\n\n\u2705 Order ID (E-commerce):\n - Unique per transaction\n - Even temporal distribution\n - Example: \"order-2024-11-15-abc123\"\n```\n\n**Bad Keys** (Low Cardinality, 
Hotspots):\n```yaml\n\u274c Country Code:\n - Only ~200 values\n - Uneven (US, CN >> others)\n - Creates partition hotspots\n\n\u274c Boolean Flags:\n - Only 2 values (true/false)\n - Severe imbalance\n\n\u274c Date (YYYY-MM-DD):\n - All today's traffic \u2192 1 partition\n - Temporal hotspot\n```\n\n**Compound Keys** (Best of Both):\n```yaml\n\u2705 Country + User ID:\n - Partition by country for locality\n - Sub-partition by user for distribution\n - Example: \"US:user-123\" \u2192 hash(\"US:user-123\")\n\n\u2705 Tenant + Event Type + Timestamp:\n - Multi-tenant isolation\n - Event type grouping\n - Temporal ordering\n```\n\n## Replication & High Availability\n\n### Replication Factor Guidelines\n\n```yaml\nDevelopment:\n Replication Factor: 1\n Reason: Fast, no durability needed\n\nProduction (Standard):\n Replication Factor: 3\n Reason: Balance durability vs cost\n Tolerates: 2 broker failures (with min.insync.replicas=2)\n\nProduction (Critical):\n Replication Factor: 5\n Reason: Maximum durability\n Tolerates: 4 broker failures (with min.insync.replicas=3)\n Use Cases: Financial transactions, audit logs\n\nMulti-Datacenter:\n Replication Factor: 3 per DC (6 total)\n Reason: DC-level fault tolerance\n Requires: MirrorMaker 2 or Confluent Replicator\n```\n\n### min.insync.replicas\n\n**Configuration**:\n```yaml\nmin.insync.replicas=2:\n - At least 2 replicas must acknowledge writes\n - Typical for replication.factor=3\n - Prevents data loss if 1 broker fails\n\nmin.insync.replicas=1:\n - Only leader must acknowledge (dangerous!)\n - Use only for non-critical topics\n\nmin.insync.replicas=3:\n - At least 3 replicas must acknowledge\n - For replication.factor=5 (critical systems)\n```\n\n**Rule**: `min.insync.replicas \u2264 replication.factor - 1` (to allow 1 replica failure)\n\n### Rack Awareness\n\n```yaml\nConfiguration:\n broker.rack=rack1 # Broker 1\n broker.rack=rack2 # Broker 2\n broker.rack=rack3 # Broker 3\n\nBenefit:\n - Replicas spread across racks\n 
- Survives rack-level failures (power, network)\n - Example: Topic with RF=3 \u2192 1 replica per rack\n\nPlacement:\n Leader: rack1\n Follower 1: rack2\n Follower 2: rack3\n```\n\n## Retention Strategies\n\n### Time-Based Retention\n\n```yaml\nShort-Term (Events, Logs):\n retention.ms: 86400000 # 1 day\n Use Cases: Real-time analytics, monitoring\n\nMedium-Term (Transactions):\n retention.ms: 604800000 # 7 days\n Use Cases: Standard business events\n\nLong-Term (Audit, Compliance):\n retention.ms: 31536000000 # 365 days\n Use Cases: Regulatory requirements, event sourcing\n\nInfinite (Event Sourcing):\n retention.ms: -1 # Forever\n cleanup.policy: compact\n Use Cases: Source of truth, state rebuilding\n```\n\n### Size-Based Retention\n\n```yaml\nretention.bytes: 10737418240 # 10 GB per partition\n\nCombined (Time OR Size):\n retention.ms: 604800000 # 7 days\n retention.bytes: 107374182400 # 100 GB\n # Whichever limit is reached first\n```\n\n### Compaction (Log Compaction)\n\n```yaml\ncleanup.policy: compact\n\nHow It Works:\n - Keeps only latest value per key\n - Deletes old versions\n - Preserves full history initially, compacts later\n\nUse Cases:\n - Database changelogs (CDC)\n - User profile updates\n - Configuration management\n - State stores\n\nExample:\n Before Compaction:\n user:123 \u2192 {name: \"Alice\", v:1}\n user:123 \u2192 {name: \"Alice\", v:2, email: \"alice@ex.com\"}\n user:123 \u2192 {name: \"Alice A.\", v:3}\n\n After Compaction:\n user:123 \u2192 {name: \"Alice A.\", v:3} # Latest only\n```\n\n## Performance Optimization\n\n### Broker Configuration\n\n```yaml\n# Network threads (handle client connections)\nnum.network.threads: 8 # Increase for high connection count\n\n# I/O threads (disk operations)\nnum.io.threads: 16 # Set to number of disks \u00d7 2\n\n# Replica fetcher threads\nnum.replica.fetchers: 4 # Increase for many partitions\n\n# Socket buffer sizes\nsocket.send.buffer.bytes: 1048576 # 1 MB\nsocket.receive.buffer.bytes: 1048576 # 
1 MB\n\n# Log flush (default: OS handles flushing)\nlog.flush.interval.messages: 10000 # Flush every 10K messages\nlog.flush.interval.ms: 1000 # Or every 1 second\n```\n\n### Producer Optimization\n\n```yaml\nHigh Throughput:\n batch.size: 65536 # 64 KB\n linger.ms: 100 # Wait 100ms for batching\n compression.type: lz4 # Fast compression\n acks: 1 # Leader only\n\nLow Latency:\n batch.size: 16384 # 16 KB (default)\n linger.ms: 0 # Send immediately\n compression.type: none\n acks: 1\n\nDurability (Exactly-Once):\n batch.size: 16384\n linger.ms: 10\n compression.type: lz4\n acks: all\n enable.idempotence: true\n transactional.id: \"producer-1\"\n```\n\n### Consumer Optimization\n\n```yaml\nHigh Throughput:\n fetch.min.bytes: 1048576 # 1 MB\n fetch.max.wait.ms: 500 # Wait 500ms to accumulate\n\nLow Latency:\n fetch.min.bytes: 1 # Immediate fetch\n fetch.max.wait.ms: 100 # Short wait\n\nMax Parallelism:\n # Deploy consumers = number of partitions\n # More consumers than partitions = idle consumers\n```\n\n## Multi-Datacenter Patterns\n\n### Active-Passive (Disaster Recovery)\n\n```yaml\nArchitecture:\n Primary DC: Full Kafka cluster\n Secondary DC: Replica cluster (MirrorMaker 2)\n\nConfiguration:\n - Producers \u2192 Primary only\n - Consumers \u2192 Primary only\n - MirrorMaker 2: Primary \u2192 Secondary (async replication)\n\nFailover:\n 1. Detect primary failure\n 2. Switch producers/consumers to secondary\n 3. 
Promote secondary to primary\n\nRecovery Time: 5-30 minutes (manual)\nData Loss: Potential (async replication lag)\n```\n\n### Active-Active (Geo-Replication)\n\n```yaml\nArchitecture:\n DC1: Kafka cluster (region A)\n DC2: Kafka cluster (region B)\n Bidirectional replication via MirrorMaker 2\n\nConfiguration:\n - Producers \u2192 Nearest DC\n - Consumers \u2192 Nearest DC or both\n - Conflict resolution: Last-write-wins or custom\n\nChallenges:\n - Duplicate messages (at-least-once delivery)\n - Ordering across DCs not guaranteed\n - Circular replication prevention\n\nUse Cases:\n - Global applications\n - Regional compliance (GDPR)\n - Load distribution\n```\n\n### Stretch Cluster (Synchronous Replication)\n\n```yaml\nArchitecture:\n Single Kafka cluster spanning 2 DCs\n Rack awareness: DC1 = rack1, DC2 = rack2\n\nConfiguration:\n min.insync.replicas: 2\n replication.factor: 4 (2 per DC)\n acks: all\n\nRequirements:\n - Low latency between DCs (<10ms)\n - High bandwidth link (10+ Gbps)\n - Dedicated fiber\n\nTrade-offs:\n Pros: Synchronous replication, zero data loss\n Cons: Latency penalty, network dependency\n```\n\n## Monitoring & Observability\n\n### Key Metrics\n\n**Broker Metrics**:\n```yaml\nUnderReplicatedPartitions:\n Alert: > 0 for > 5 minutes\n Indicates: Replica lag, broker failure\n\nOfflinePartitionsCount:\n Alert: > 0\n Indicates: No leader elected (critical!)\n\nActiveControllerCount:\n Alert: != 1 (should be exactly 1)\n Indicates: Split brain or no controller\n\nRequestHandlerAvgIdlePercent:\n Alert: < 20%\n Indicates: Broker CPU saturation\n```\n\n**Topic Metrics**:\n```yaml\nMessagesInPerSec:\n Monitor: Throughput trends\n Alert: Sudden drops (producer failure)\n\nBytesInPerSec / BytesOutPerSec:\n Monitor: Network utilization\n Alert: Approaching NIC limits\n\nRecordsLagMax (Consumer):\n Alert: > 10000 or growing\n Indicates: Consumer can't keep up\n```\n\n**Disk Metrics**:\n```yaml\nLogSegmentSize:\n Monitor: Disk usage trends\n Alert: > 80% 
capacity\n\nLogFlushRateAndTimeMs:\n Monitor: Disk write latency\n Alert: > 100ms p99 (slow disk)\n```\n\n## Security Patterns\n\n### Authentication & Authorization\n\n```yaml\nSASL/SCRAM-SHA-512:\n - Industry standard\n - User/password authentication\n - Stored in ZooKeeper/KRaft\n\nACLs (Access Control Lists):\n - Per-topic, per-group permissions\n - Operations: READ, WRITE, CREATE, DELETE, ALTER\n - Example:\n bin/kafka-acls.sh --add \\\n --allow-principal User:alice \\\n --operation READ \\\n --topic orders\n\nmTLS (Mutual TLS):\n - Certificate-based auth\n - Strong cryptographic identity\n - Best for service-to-service\n```\n\n## Integration with SpecWeave\n\n**Automatic Architecture Detection**:\n```typescript\nimport { ClusterSizingCalculator } from './lib/utils/sizing';\n\nconst calculator = new ClusterSizingCalculator();\nconst recommendation = calculator.calculate({\n throughputMBps: 200,\n retentionDays: 30,\n replicationFactor: 3,\n topicCount: 100\n});\n\nconsole.log(recommendation);\n// {\n// brokers: 8,\n// partitionsPerBroker: 1500,\n// diskPerBroker: 6000 GB,\n// ramPerBroker: 64 GB\n// }\n```\n\n**SpecWeave Commands**:\n- `/sw-kafka:deploy` - Validates cluster sizing before deployment\n- `/sw-kafka:monitor-setup` - Configures metrics for key indicators\n\n## Related Skills\n\n- `/sw-kafka:kafka-mcp-integration` - MCP server setup\n- `/sw-kafka:kafka-cli-tools` - CLI operations\n\n## External Links\n\n- [Kafka Documentation - Architecture](https://kafka.apache.org/documentation/#design)\n- [Confluent - Kafka Sizing](https://www.confluent.io/blog/how-to-choose-the-number-of-topics-partitions-in-a-kafka-cluster/)\n- [KRaft Mode Overview](https://kafka.apache.org/documentation/#kraft)\n- [LinkedIn Engineering - Kafka at Scale](https://engineering.linkedin.com/kafka/running-kafka-scale)\n",
49 "arguments": [],
50 "icons": null,
51 "meta": null
52 }
53 ],
54 "resources": [
55 {
56 "name": "_meta.json",
57 "title": null,
58 "uri": "skill://_meta.json",
59 "description": "{\n \"owner\": \"anton-abyzov\",\n \"slug\": \"sw-kafka-architecture\",\n \"displayName\": \"Kafka Architecture\",\n \"latest\": {\n \"version\": \"1.0.0\",\n \"publishedAt\": 1770713719529,\n \"commit\": \"https://github.com/openclaw/skills/commit/5f0cffbc5691b7258192ee0fd32dd54822a334ba\"\n },\n \"history\": []\n}\n",
60 "mimeType": null,
61 "size": null,
62 "icons": null,
63 "annotations": null,
64 "meta": null
65 }
66 ],
67 "resource_templates": [],
68 "tools": []
69 },
70 "error": null
71 }
72 ],
73 "issues": [
74 {
75 "code": "W004",
76 "message": "The MCP server is not in our registry.",
77 "reference": [
78 0,
79 null
80 ],
81 "extra_data": null
82 }
83 ],
84 "labels": [
85 [
86 {
87 "is_public_sink": 0,
88 "destructive": 0,
89 "untrusted_content": 0,
90 "private_data": 0
91 },
92 {
93 "is_public_sink": 0,
94 "destructive": 0,
95 "untrusted_content": 0,
96 "private_data": 0
97 }
98 ]
99 ],
100 "error": null
101 }
102}
103
104Process exited with code 0
105✓ Completed in 203111ms
npm-audit · skipped · 0ms
No package.json found — skipping npm audit.

Logs:
No package.json found at /tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture/package.json
Skipping npm audit.

Files analyzed

SKILL.md

Rules coverage: 147 patterns

prompt injection: 58
secrets: 15
malware: 53
permissions: 21

Security Findings

Low · MCP-W004 · mcp-scan · mcp

The MCP server is not in our registry.

Scan History: 1 scan

Warning · c53fe1e · 1 finding
critical: 0 · high: 0 · medium: 0 · low: 1 · info: 0

1 MB\n\n# Log flush (default: OS handles flushing)\nlog.flush.interval.messages: 10000 # Flush every 10K messages\nlog.flush.interval.ms: 1000 # Or every 1 second\n```\n\n### Producer Optimization\n\n```yaml\nHigh Throughput:\n batch.size: 65536 # 64 KB\n linger.ms: 100 # Wait 100ms for batching\n compression.type: lz4 # Fast compression\n acks: 1 # Leader only\n\nLow Latency:\n batch.size: 16384 # 16 KB (default)\n linger.ms: 0 # Send immediately\n compression.type: none\n acks: 1\n\nDurability (Exactly-Once):\n batch.size: 16384\n linger.ms: 10\n compression.type: lz4\n acks: all\n enable.idempotence: true\n transactional.id: \"producer-1\"\n```\n\n### Consumer Optimization\n\n```yaml\nHigh Throughput:\n fetch.min.bytes: 1048576 # 1 MB\n fetch.max.wait.ms: 500 # Wait 500ms to accumulate\n\nLow Latency:\n fetch.min.bytes: 1 # Immediate fetch\n fetch.max.wait.ms: 100 # Short wait\n\nMax Parallelism:\n # Deploy consumers = number of partitions\n # More consumers than partitions = idle consumers\n```\n\n## Multi-Datacenter Patterns\n\n### Active-Passive (Disaster Recovery)\n\n```yaml\nArchitecture:\n Primary DC: Full Kafka cluster\n Secondary DC: Replica cluster (MirrorMaker 2)\n\nConfiguration:\n - Producers \u2192 Primary only\n - Consumers \u2192 Primary only\n - MirrorMaker 2: Primary \u2192 Secondary (async replication)\n\nFailover:\n 1. Detect primary failure\n 2. Switch producers/consumers to secondary\n 3. 
Promote secondary to primary\n\nRecovery Time: 5-30 minutes (manual)\nData Loss: Potential (async replication lag)\n```\n\n### Active-Active (Geo-Replication)\n\n```yaml\nArchitecture:\n DC1: Kafka cluster (region A)\n DC2: Kafka cluster (region B)\n Bidirectional replication via MirrorMaker 2\n\nConfiguration:\n - Producers \u2192 Nearest DC\n - Consumers \u2192 Nearest DC or both\n - Conflict resolution: Last-write-wins or custom\n\nChallenges:\n - Duplicate messages (at-least-once delivery)\n - Ordering across DCs not guaranteed\n - Circular replication prevention\n\nUse Cases:\n - Global applications\n - Regional compliance (GDPR)\n - Load distribution\n```\n\n### Stretch Cluster (Synchronous Replication)\n\n```yaml\nArchitecture:\n Single Kafka cluster spanning 2 DCs\n Rack awareness: DC1 = rack1, DC2 = rack2\n\nConfiguration:\n min.insync.replicas: 2\n replication.factor: 4 (2 per DC)\n acks: all\n\nRequirements:\n - Low latency between DCs (<10ms)\n - High bandwidth link (10+ Gbps)\n - Dedicated fiber\n\nTrade-offs:\n Pros: Synchronous replication, zero data loss\n Cons: Latency penalty, network dependency\n```\n\n## Monitoring & Observability\n\n### Key Metrics\n\n**Broker Metrics**:\n```yaml\nUnderReplicatedPartitions:\n Alert: > 0 for > 5 minutes\n Indicates: Replica lag, broker failure\n\nOfflinePartitionsCount:\n Alert: > 0\n Indicates: No leader elected (critical!)\n\nActiveControllerCount:\n Alert: != 1 (should be exactly 1)\n Indicates: Split brain or no controller\n\nRequestHandlerAvgIdlePercent:\n Alert: < 20%\n Indicates: Broker CPU saturation\n```\n\n**Topic Metrics**:\n```yaml\nMessagesInPerSec:\n Monitor: Throughput trends\n Alert: Sudden drops (producer failure)\n\nBytesInPerSec / BytesOutPerSec:\n Monitor: Network utilization\n Alert: Approaching NIC limits\n\nRecordsLagMax (Consumer):\n Alert: > 10000 or growing\n Indicates: Consumer can't keep up\n```\n\n**Disk Metrics**:\n```yaml\nLogSegmentSize:\n Monitor: Disk usage trends\n Alert: > 80% 
capacity\n\nLogFlushRateAndTimeMs:\n Monitor: Disk write latency\n Alert: > 100ms p99 (slow disk)\n```\n\n## Security Patterns\n\n### Authentication & Authorization\n\n```yaml\nSASL/SCRAM-SHA-512:\n - Industry standard\n - User/password authentication\n - Stored in ZooKeeper/KRaft\n\nACLs (Access Control Lists):\n - Per-topic, per-group permissions\n - Operations: READ, WRITE, CREATE, DELETE, ALTER\n - Example:\n bin/kafka-acls.sh --add \\\n --allow-principal User:alice \\\n --operation READ \\\n --topic orders\n\nmTLS (Mutual TLS):\n - Certificate-based auth\n - Strong cryptographic identity\n - Best for service-to-service\n```\n\n## Integration with SpecWeave\n\n**Automatic Architecture Detection**:\n```typescript\nimport { ClusterSizingCalculator } from './lib/utils/sizing';\n\nconst calculator = new ClusterSizingCalculator();\nconst recommendation = calculator.calculate({\n throughputMBps: 200,\n retentionDays: 30,\n replicationFactor: 3,\n topicCount: 100\n});\n\nconsole.log(recommendation);\n// {\n// brokers: 8,\n// partitionsPerBroker: 1500,\n// diskPerBroker: 6000 GB,\n// ramPerBroker: 64 GB\n// }\n```\n\n**SpecWeave Commands**:\n- `/sw-kafka:deploy` - Validates cluster sizing before deployment\n- `/sw-kafka:monitor-setup` - Configures metrics for key indicators\n\n## Related Skills\n\n- `/sw-kafka:kafka-mcp-integration` - MCP server setup\n- `/sw-kafka:kafka-cli-tools` - CLI operations\n\n## External Links\n\n- [Kafka Documentation - Architecture](https://kafka.apache.org/documentation/#design)\n- [Confluent - Kafka Sizing](https://www.confluent.io/blog/how-to-choose-the-number-of-topics-partitions-in-a-kafka-cluster/)\n- [KRaft Mode Overview](https://kafka.apache.org/documentation/#kraft)\n- [LinkedIn Engineering - Kafka at Scale](https://engineering.linkedin.com/kafka/running-kafka-scale)\n",
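The compound-key guidance above ("US:user-123" → hash) can be sketched in a few lines. The sketch below uses FNV-1a purely for illustration; Kafka's default partitioner uses murmur2, so the actual partition numbers assigned by the Java client would differ, but the mechanics (hash the compound key, take it modulo the partition count) are the same.

```typescript
// Illustrative compound-key partitioner. Kafka's default partitioner
// uses murmur2; FNV-1a here only demonstrates the mechanics.
function fnv1a(key: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0; // force unsigned 32-bit
}

function partitionFor(country: string, userId: string, numPartitions: number): number {
  const compoundKey = `${country}:${userId}`; // e.g. "US:user-123"
  return fnv1a(compoundKey) % numPartitions;
}

// A hot country no longer maps to a single partition: each user
// within it hashes independently, spreading the load.
const hit = new Set<number>();
for (let i = 0; i < 1000; i++) {
  hit.add(partitionFor("US", `user-${i}`, 12));
}
console.log(hit.size); // close to 12 — US traffic spreads across the topic
```

Had the key been the bare country code, every one of those 1000 records would have landed on the same partition, which is exactly the hotspot the compound key avoids.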
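The storage arithmetic implied by the retention and replication sections (throughput × retention × replication factor, divided across brokers) can be reproduced directly. This is a hypothetical re-derivation, not the actual `ClusterSizingCalculator` shipped with SpecWeave, and the 30% headroom factor is an assumption chosen here to stay under the 80% disk alert threshold mentioned under Disk Metrics.

```typescript
// Hypothetical storage-driven broker count: raw ingress × retention window
// × replication factor, plus headroom, divided by usable disk per broker.
interface SizingInput {
  throughputMBps: number;   // sustained producer ingress, MB/s
  retentionDays: number;
  replicationFactor: number;
  diskPerBrokerGB: number;  // usable disk per broker
}

function estimateBrokersForStorage(input: SizingInput, headroom = 0.3): number {
  const secondsRetained = input.retentionDays * 24 * 3600;
  const totalGB =
    (input.throughputMBps * secondsRetained * input.replicationFactor) / 1024;
  const withHeadroom = totalGB * (1 + headroom);
  return Math.ceil(withHeadroom / input.diskPerBrokerGB);
}

// 50 MB/s retained for 7 days at RF=3 on 6 TB brokers:
// 50 × 604800 × 3 / 1024 ≈ 88594 GB, ×1.3 ≈ 115172 GB, /6000 → 20 brokers
console.log(estimateBrokersForStorage({
  throughputMBps: 50,
  retentionDays: 7,
  replicationFactor: 3,
  diskPerBrokerGB: 6000
})); // → 20
```

Note that storage is only one axis: partition count, network bandwidth, and RAM for page cache can each force a larger cluster than this estimate alone.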
npm-audit
No package.json found — skipping npm audit
View logs
npm-audit0ms
1No package.json found at /tmp/clawguard-scan-c1Ca9R/repo/skills/anton-abyzov/sw-kafka-architecture/package.json
2Skipping npm audit.

Scanned: 2/11/2026, 7:24:26 PM