## Requirements

- Target platform: OpenClaw
- Install method: Manual import
- Extraction: Extract archive
- Prerequisites: OpenClaw
- Primary doc: SKILL.md
Access Google BigQuery to run SQL queries, manage datasets and tables, and perform large-scale data analysis with OAuth authentication via the Maton API.
Hand the extracted package to your coding agent with a concrete install brief instead of working through the steps manually. Example prompts:

- Fresh install: "I downloaded a skill package from Yavira. Read SKILL.md from the extracted folder and install it by following the included instructions. Tell me what you changed and call out any manual steps you could not complete."
- Upgrade: "I downloaded an updated skill package from Yavira. Read SKILL.md from the extracted folder, compare it with my current installation, and upgrade it while preserving any custom configuration unless the package docs explicitly say otherwise. Summarize what changed and any follow-up checks I should run."
Access the Google BigQuery API with managed OAuth authentication. Run SQL queries, manage datasets and tables, and analyze data at scale.
```shell
# Run a simple query (replace {projectId} with your project ID)
python <<'EOF'
import urllib.request, os, json

data = json.dumps({'query': 'SELECT 1 as test_value', 'useLegacySql': False}).encode()
req = urllib.request.Request(
    'https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/{projectId}/queries',
    data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
```
```
https://gateway.maton.ai/google-bigquery/bigquery/v2/{resource-path}
```

Replace `{resource-path}` with the actual BigQuery API endpoint path. The gateway proxies requests to `bigquery.googleapis.com` and automatically injects your OAuth token.
All requests require the Maton API key in the `Authorization` header:

```
Authorization: Bearer $MATON_API_KEY
```

Set your API key in the `MATON_API_KEY` environment variable:

```shell
export MATON_API_KEY="YOUR_API_KEY"
```
1. Sign in or create an account at maton.ai
2. Go to maton.ai/settings
3. Copy your API key
Manage your Google BigQuery OAuth connections at https://ctrl.maton.ai.
```shell
# List active Google BigQuery connections
python <<'EOF'
import urllib.request, os, json

req = urllib.request.Request('https://ctrl.maton.ai/connections?app=google-bigquery&status=ACTIVE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
```
```shell
# Create a new Google BigQuery connection
python <<'EOF'
import urllib.request, os, json

data = json.dumps({'app': 'google-bigquery'}).encode()
req = urllib.request.Request('https://ctrl.maton.ai/connections', data=data, method='POST')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Content-Type', 'application/json')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
```
```shell
# Get connection details (replace {connection_id} with a real ID)
python <<'EOF'
import urllib.request, os, json

req = urllib.request.Request('https://ctrl.maton.ai/connections/{connection_id}')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
```

Response:

```json
{
  "connection": {
    "connection_id": "c8463a31-e5b4-4e52-9a32-e78dcd7ba7b1",
    "status": "ACTIVE",
    "creation_time": "2026-02-14T09:02:02.780520Z",
    "last_updated_time": "2026-02-14T09:02:19.977436Z",
    "url": "https://connect.maton.ai/?session_token=...",
    "app": "google-bigquery",
    "metadata": {}
  }
}
```

Open the returned `url` in a browser to complete OAuth authorization.
```shell
# Delete a connection
python <<'EOF'
import urllib.request, os, json

req = urllib.request.Request('https://ctrl.maton.ai/connections/{connection_id}', method='DELETE')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
```
If you have multiple Google BigQuery connections, specify which one to use with the `Maton-Connection` header:

```shell
python <<'EOF'
import urllib.request, os, json

req = urllib.request.Request('https://gateway.maton.ai/google-bigquery/bigquery/v2/projects')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
req.add_header('Maton-Connection', 'c8463a31-e5b4-4e52-9a32-e78dcd7ba7b1')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
```

If omitted, the gateway uses the default (oldest) active connection.
### List Projects

List all projects accessible to the authenticated user.

```
GET /google-bigquery/bigquery/v2/projects
```

Response:

```json
{
  "kind": "bigquery#projectList",
  "projects": [
    {
      "id": "my-project-123",
      "numericId": "822245862053",
      "projectReference": { "projectId": "my-project-123" },
      "friendlyName": "My Project"
    }
  ],
  "totalItems": 1
}
```
### List Datasets

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets
```

Query parameters:

- `maxResults` - Maximum number of results to return
- `pageToken` - Token for pagination
- `all` - Include hidden datasets if true

### Get Dataset

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}
```

### Create Dataset

```
POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets
Content-Type: application/json
```

```json
{
  "datasetReference": { "datasetId": "my_dataset", "projectId": "{projectId}" },
  "description": "My dataset description",
  "location": "US"
}
```

Response:

```json
{
  "kind": "bigquery#dataset",
  "id": "my-project:my_dataset",
  "datasetReference": { "datasetId": "my_dataset", "projectId": "my-project" },
  "location": "US",
  "creationTime": "1771059780773"
}
```

### Update Dataset (PATCH)

```
PATCH /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}
Content-Type: application/json
```

```json
{ "description": "Updated description" }
```

### Delete Dataset

```
DELETE /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}
```

Query parameters:

- `deleteContents` - If true, delete all tables in the dataset (default: false)
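The Create Dataset call can be scripted in the same urllib style as the quickstart. A minimal sketch, assuming `MATON_API_KEY` is set and an active connection exists; the helper names (`dataset_body`, `create_dataset`) and the example project/dataset IDs are illustrative, not part of the gateway:

```python
import json
import os
import urllib.request

GATEWAY = 'https://gateway.maton.ai/google-bigquery/bigquery/v2'

def dataset_body(project_id, dataset_id, description='', location='US'):
    """Build the JSON body for a Create Dataset request."""
    return {
        'datasetReference': {'datasetId': dataset_id, 'projectId': project_id},
        'description': description,
        'location': location,
    }

def create_dataset(project_id, dataset_id, **kwargs):
    """POST the dataset body through the Maton gateway."""
    data = json.dumps(dataset_body(project_id, dataset_id, **kwargs)).encode()
    req = urllib.request.Request(f'{GATEWAY}/projects/{project_id}/datasets',
                                 data=data, method='POST')
    req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
    req.add_header('Content-Type', 'application/json')
    return json.load(urllib.request.urlopen(req))

# Example (requires MATON_API_KEY and an authorized connection):
# create_dataset('my-project-123', 'my_dataset', description='Demo dataset')
```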
### List Tables

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables
```

Query parameters:

- `maxResults` - Maximum number of results to return
- `pageToken` - Token for pagination

### Get Table

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}
```

### Create Table

```
POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables
Content-Type: application/json
```

```json
{
  "tableReference": {
    "projectId": "{projectId}",
    "datasetId": "{datasetId}",
    "tableId": "my_table"
  },
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
      {"name": "name", "type": "STRING", "mode": "NULLABLE"},
      {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
    ]
  }
}
```

Response:

```json
{
  "kind": "bigquery#table",
  "id": "my-project:my_dataset.my_table",
  "tableReference": {
    "projectId": "my-project",
    "datasetId": "my_dataset",
    "tableId": "my_table"
  },
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
      {"name": "name", "type": "STRING", "mode": "NULLABLE"},
      {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
    ]
  },
  "numRows": "0",
  "type": "TABLE"
}
```

### Update Table (PATCH)

```
PATCH /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}
Content-Type: application/json
```

```json
{ "description": "Updated table description" }
```

### Delete Table

```
DELETE /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}
```
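Building the Create Table body by hand gets repetitive; a couple of tiny helpers keep the schema readable. A sketch under the request format shown above; the `field` and `table_body` helper names are my own:

```python
def field(name, field_type, mode='NULLABLE'):
    """One schema field in BigQuery's REST format."""
    return {'name': name, 'type': field_type, 'mode': mode}

def table_body(project_id, dataset_id, table_id, fields):
    """Build the JSON body for a Create Table request."""
    return {
        'tableReference': {
            'projectId': project_id,
            'datasetId': dataset_id,
            'tableId': table_id,
        },
        'schema': {'fields': fields},
    }

body = table_body('my-project', 'my_dataset', 'my_table', [
    field('id', 'INTEGER', 'REQUIRED'),
    field('name', 'STRING'),
    field('created_at', 'TIMESTAMP'),
])
# POST this body to /projects/{projectId}/datasets/{datasetId}/tables.
```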
### List Table Data

Retrieve rows from a table.

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}/data
```

Query parameters:

- `maxResults` - Maximum number of results to return
- `pageToken` - Token for pagination
- `startIndex` - Zero-based index of the starting row

Response:

```json
{
  "kind": "bigquery#tableDataList",
  "totalRows": "100",
  "rows": [
    {
      "f": [
        {"v": "1"},
        {"v": "Alice"},
        {"v": "1.7710597807E9"}
      ]
    }
  ],
  "pageToken": "..."
}
```

### Insert Table Data (Streaming)

Insert rows into a table using streaming insert. Note: requires the BigQuery paid tier.

```
POST /google-bigquery/bigquery/v2/projects/{projectId}/datasets/{datasetId}/tables/{tableId}/insertAll
Content-Type: application/json
```

```json
{
  "rows": [
    {"json": {"id": 1, "name": "Alice"}},
    {"json": {"id": 2, "name": "Bob"}}
  ]
}
```
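The `f`/`v` row structure is easier to work with once flattened into dicts keyed by the schema's field names. A small sketch (`rows_to_dicts` is a name of my choosing), using the sample response above:

```python
def rows_to_dicts(schema_fields, rows):
    """Pair each row's f/v cells with the schema's field names."""
    names = [f['name'] for f in schema_fields]
    return [
        dict(zip(names, (cell['v'] for cell in row['f'])))
        for row in rows
    ]

# Sample schema and one row from a tabledata.list response:
schema = [{'name': 'id'}, {'name': 'name'}, {'name': 'created_at'}]
rows = [{'f': [{'v': '1'}, {'v': 'Alice'}, {'v': '1.7710597807E9'}]}]
print(rows_to_dicts(schema, rows))
# [{'id': '1', 'name': 'Alice', 'created_at': '1.7710597807E9'}]
```

Note that BigQuery returns all cell values as strings (including integers and timestamps), so convert types yourself as needed.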
### Run Query (Synchronous)

Execute a SQL query and return results directly.

```
POST /google-bigquery/bigquery/v2/projects/{projectId}/queries
Content-Type: application/json
```

```json
{
  "query": "SELECT * FROM `my_dataset.my_table` LIMIT 10",
  "useLegacySql": false,
  "maxResults": 100
}
```

Response:

```json
{
  "kind": "bigquery#queryResponse",
  "schema": {
    "fields": [
      {"name": "id", "type": "INTEGER"},
      {"name": "name", "type": "STRING"}
    ]
  },
  "jobReference": { "projectId": "my-project", "jobId": "job_abc123", "location": "US" },
  "totalRows": "2",
  "rows": [
    {"f": [{"v": "1"}, {"v": "Alice"}]},
    {"f": [{"v": "2"}, {"v": "Bob"}]}
  ],
  "jobComplete": true,
  "totalBytesProcessed": "1024"
}
```

Request body fields:

- `useLegacySql` - Use legacy SQL syntax (default: false for GoogleSQL)
- `maxResults` - Maximum results per page
- `timeoutMs` - Query timeout in milliseconds

### Create Job (Asynchronous)

Submit a job for asynchronous execution.

```
POST /google-bigquery/bigquery/v2/projects/{projectId}/jobs
Content-Type: application/json
```

```json
{
  "configuration": {
    "query": {
      "query": "SELECT * FROM `my_dataset.my_table`",
      "useLegacySql": false,
      "destinationTable": {
        "projectId": "{projectId}",
        "datasetId": "{datasetId}",
        "tableId": "results_table"
      },
      "writeDisposition": "WRITE_TRUNCATE"
    }
  }
}
```

### List Jobs

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/jobs
```

Query parameters:

- `maxResults` - Maximum number of results to return
- `pageToken` - Token for pagination
- `stateFilter` - Filter by job state: done, pending, running
- `projection` - full or minimal

Response:

```json
{
  "kind": "bigquery#jobList",
  "jobs": [
    {
      "id": "my-project:US.job_abc123",
      "jobReference": { "projectId": "my-project", "jobId": "job_abc123", "location": "US" },
      "state": "DONE",
      "statistics": {
        "creationTime": "1771059781456",
        "startTime": "1771059782203",
        "endTime": "1771059782324"
      }
    }
  ]
}
```

### Get Job

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/jobs/{jobId}
```

Query parameters:

- `location` - Job location (e.g., "US", "EU")

### Get Query Results

Retrieve results from a completed query job.

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/queries/{jobId}
```

Query parameters:

- `location` - Job location
- `maxResults` - Maximum results per page
- `pageToken` - Token for pagination
- `startIndex` - Zero-based starting row

### Cancel Job

```
POST /google-bigquery/bigquery/v2/projects/{projectId}/jobs/{jobId}/cancel
```

Query parameters:

- `location` - Job location
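Asynchronous jobs are typically submitted, then polled via Get Job until the job completes, and only then are results fetched. A hedged sketch of that loop; `get_job`, `wait_for_job`, and the assumption that the job's state lives under `status.state` in the Get Job response are illustrative, not a gateway-specific API:

```python
import json
import os
import time
import urllib.request

GATEWAY = 'https://gateway.maton.ai/google-bigquery/bigquery/v2'

def job_done(job):
    """True once a Get Job response reports a DONE state."""
    return job.get('status', {}).get('state') == 'DONE'

def get_job(project_id, job_id, location='US'):
    """Fetch one job through the Maton gateway."""
    url = f'{GATEWAY}/projects/{project_id}/jobs/{job_id}?location={location}'
    req = urllib.request.Request(url)
    req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
    return json.load(urllib.request.urlopen(req))

def wait_for_job(project_id, job_id, location='US', interval=2.0):
    """Poll Get Job until the job completes, then return it."""
    while True:
        job = get_job(project_id, job_id, location)
        if job_done(job):
            return job
        time.sleep(interval)

# Example (requires MATON_API_KEY and a submitted job):
# wait_for_job('my-project', 'job_abc123')
```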
BigQuery uses token-based pagination. List responses include a `nextPageToken` when more results exist:

```
GET /google-bigquery/bigquery/v2/projects/{projectId}/datasets?maxResults=10&pageToken={token}
```

Response:

```json
{
  "datasets": [...],
  "nextPageToken": "eyJvZmZzZXQiOjEwfQ=="
}
```

Use the `nextPageToken` value as `pageToken` in subsequent requests.
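The token loop can be wrapped in a small generator. A sketch: `fetch_page` is any callable of your own that takes a page-token value and returns one parsed list response, and the item key (`datasets` below) varies by endpoint:

```python
def paginate(fetch_page, item_key='datasets'):
    """Yield items from every page, following nextPageToken."""
    token = None
    while True:
        page = fetch_page(token)
        yield from page.get(item_key, [])
        token = page.get('nextPageToken')
        if not token:
            break

# Demo with canned pages instead of live requests:
pages = {
    None: {'datasets': [{'id': 'a'}], 'nextPageToken': 't1'},
    't1': {'datasets': [{'id': 'b'}]},
}
print(list(paginate(lambda tok: pages[tok])))
# [{'id': 'a'}, {'id': 'b'}]
```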
```javascript
// Run a query
const response = await fetch(
  'https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/my-project/queries',
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.MATON_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      query: 'SELECT * FROM `my_dataset.my_table` LIMIT 10',
      useLegacySql: false
    })
  }
);
const data = await response.json();
console.log(data.rows);
```
```python
import os
import requests

# Run a query
response = requests.post(
    'https://gateway.maton.ai/google-bigquery/bigquery/v2/projects/my-project/queries',
    headers={'Authorization': f'Bearer {os.environ["MATON_API_KEY"]}'},
    json={
        'query': 'SELECT * FROM `my_dataset.my_table` LIMIT 10',
        'useLegacySql': False
    }
)
data = response.json()
for row in data.get('rows', []):
    print([field['v'] for field in row['f']])
```
Common BigQuery data types for table schemas:

| Type | Description |
|---|---|
| STRING | Variable-length character data |
| INTEGER | 64-bit signed integer |
| FLOAT | 64-bit IEEE floating point |
| BOOLEAN | True or false |
| TIMESTAMP | Absolute point in time |
| DATE | Calendar date |
| TIME | Time of day |
| DATETIME | Date and time |
| BYTES | Variable-length binary data |
| NUMERIC | Exact numeric value with 38 digits of precision |
| BIGNUMERIC | Exact numeric value with 76+ digits of precision |
| GEOGRAPHY | Geographic data |
| JSON | JSON data |
| RECORD | Nested fields (also called STRUCT) |

Field modes:

- `NULLABLE` - Field can be null (default)
- `REQUIRED` - Field cannot be null
- `REPEATED` - Field is an array
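RECORD and REPEATED combine to describe arrays of structs: a RECORD field carries its own nested `fields` list, and REPEATED mode makes it an array. A sketch of such a schema fragment, with illustrative field names:

```python
# An "addresses" field: an array of structs, each with city and zip.
addresses_field = {
    'name': 'addresses',
    'type': 'RECORD',
    'mode': 'REPEATED',
    'fields': [
        {'name': 'city', 'type': 'STRING', 'mode': 'NULLABLE'},
        {'name': 'zip', 'type': 'STRING', 'mode': 'NULLABLE'},
    ],
}

# Embed it in a table schema alongside scalar fields:
schema = {'fields': [
    {'name': 'id', 'type': 'INTEGER', 'mode': 'REQUIRED'},
    addresses_field,
]}
```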
- Project IDs are typically in the format `project-name` or `project-name-12345`
- Dataset IDs follow naming rules: letters, numbers, underscores (max 1024 characters)
- Table IDs follow the same naming rules as datasets
- Job IDs are generated by BigQuery and include a location prefix
- Query results use an `f` (fields) and `v` (value) structure
- Streaming inserts require the BigQuery paid tier (not available in the free tier)
- Use `useLegacySql: false` for GoogleSQL (standard SQL) syntax
- IMPORTANT: When using curl, pass `curl -g` when URLs contain brackets to disable glob parsing
- IMPORTANT: When piping curl output to `jq` or other commands, environment variables like `$MATON_API_KEY` may not expand correctly in some shell environments
| Status | Meaning |
|---|---|
| 400 | Missing Google BigQuery connection or invalid request |
| 401 | Invalid or missing Maton API key |
| 403 | Access denied (insufficient permissions or quota exceeded) |
| 404 | Resource not found (project, dataset, table, or job) |
| 409 | Resource already exists |
| 429 | Rate limited |
| 4xx/5xx | Passthrough error from BigQuery API |
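When a gateway call fails, urllib raises `HTTPError`, which can be translated into the hints above. A minimal sketch; the `hint_for` and `safe_open` helpers are my own wrapping, not part of the gateway:

```python
import urllib.error
import urllib.request

HINTS = {
    400: 'Missing Google BigQuery connection or invalid request',
    401: 'Invalid or missing Maton API key',
    403: 'Access denied (insufficient permissions or quota exceeded)',
    404: 'Resource not found (project, dataset, table, or job)',
    409: 'Resource already exists',
    429: 'Rate limited',
}

def hint_for(status):
    """Map an HTTP status code to a troubleshooting hint."""
    return HINTS.get(status, 'Passthrough error from BigQuery API')

def safe_open(req):
    """Open a prepared Request, attaching a hint to HTTP errors."""
    try:
        return urllib.request.urlopen(req)
    except urllib.error.HTTPError as e:
        raise RuntimeError(f'{e.code}: {hint_for(e.code)}') from e
```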
Check that the `MATON_API_KEY` environment variable is set:

```shell
echo $MATON_API_KEY
```

Verify the API key is valid by listing connections:

```shell
python <<'EOF'
import urllib.request, os, json

req = urllib.request.Request('https://ctrl.maton.ai/connections')
req.add_header('Authorization', f'Bearer {os.environ["MATON_API_KEY"]}')
print(json.dumps(json.load(urllib.request.urlopen(req)), indent=2))
EOF
```
Ensure your URL path starts with `google-bigquery`. For example:

- Correct: `https://gateway.maton.ai/google-bigquery/bigquery/v2/projects`
- Incorrect: `https://gateway.maton.ai/bigquery/v2/projects`
- BigQuery API Overview
- Datasets
- Tables
- Jobs
- Tabledata
- Standard SQL Reference
- Maton Community
- Maton Support