The Supabase CLI provides tools to develop your project locally and deploy to the Supabase Platform.
The CLI is still under development, but it contains all the functionality for working with your Supabase projects and the Supabase Platform.
The Supabase CLI supports global flags for every command.
Flags
--debug
Optional
no type
output debug logs to stderr
--dns-resolver <[ native | https ]>
Optional
no type
lookup domain names using the specified resolver
--experimental
Optional
no type
enable experimental features
-h, --help
Optional
no type
help for supabase
--workdir <string>
Optional
no type
path to a Supabase project directory
$ supabase init
Initialize configurations for Supabase local development.
A supabase/config.toml file is created in your current working directory. This configuration is specific to each local project.
You may override the directory path by specifying the SUPABASE_WORKDIR environment variable or --workdir flag.
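For example, assuming a hypothetical ./app project directory, either form below points the CLI at that directory:
$ supabase init --workdir ./app
$ SUPABASE_WORKDIR=./app supabase init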
In addition to config.toml, the supabase directory may also contain other Supabase objects, such as migrations, functions, tests, etc.
Flags
--use-orioledb
Optional
no type
Use OrioleDB storage engine for Postgres
--with-vscode-workspace
Optional
no type
Generate VS Code workspace.
supabase init
$ supabase login
Connect the Supabase CLI to your Supabase account by logging in with your personal access token.
Your access token is stored securely in native credentials storage. If native credentials storage is unavailable, it will be written to a plain text file at ~/.supabase/access-token.
If this behavior is not desired, such as in a CI environment, you may skip login by specifying the SUPABASE_ACCESS_TOKEN environment variable in other commands.
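For example, in CI you might run a linked command directly with the token set (the values below are placeholders):
$ SUPABASE_ACCESS_TOKEN="<personal-access-token>" supabase link --project-ref <project-ref>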
The Supabase CLI uses the stored token to access Management APIs for projects, functions, secrets, etc.
Flags
--name <string>
Optional
no type
Name that will be used to store token in your settings, defaults to built-in token name generator
--no-browser
Optional
no type
Do not open browser automatically
--token <string>
Optional
no type
Use provided token instead of automatic login flow
supabase login
$ supabase link
Link your local development project to a hosted Supabase project.
PostgREST configurations are fetched from the Supabase platform and validated against your local configuration file.
Optionally, database settings can be validated if you provide a password. Your database password is saved in native credentials storage if available.
If you do not want to be prompted for the database password, such as in a CI environment, you may specify it explicitly via the SUPABASE_DB_PASSWORD environment variable.
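For example, a non-interactive link might look like this (placeholder values shown):
$ SUPABASE_DB_PASSWORD="<db-password>" supabase link --project-ref <project-ref>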
Some commands like db dump, db push, and db remote commit require your project to be linked first.
Flags
-p, --password <string>
Optional
no type
Password to your remote Postgres database.
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase link --project-ref ********************
$ supabase start
Starts the Supabase local development stack.
Requires supabase/config.toml to be created in your current working directory by running supabase init.
All service containers are started by default. You can exclude those not needed by passing in the -x flag. To exclude multiple containers, either pass in a comma-separated string, such as -x gotrue,imgproxy, or specify the -x flag multiple times.
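For example, both of the following skip the gotrue and imgproxy containers:
$ supabase start -x gotrue,imgproxy
$ supabase start -x gotrue -x imgproxy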
It is recommended to have at least 7GB of RAM to start all services.
Health checks are automatically added to verify the started containers. Use the --ignore-health-check flag to ignore these errors.
Flags
-x, --exclude <strings>
Optional
no type
Names of containers to not start. [gotrue,realtime,storage-api,imgproxy,kong,inbucket,postgrest,pgadmin-schema-diff,migra,postgres-meta,studio,edge-runtime,logflare,vector,pgbouncer]
--ignore-health-check
Optional
no type
Ignore unhealthy services and exit 0
supabase start
$ supabase stop
Stops the Supabase local development stack.
Requires supabase/config.toml to be created in your current working directory by running supabase init.
All Docker resources are maintained across restarts. Use --no-backup flag to reset your local development data between restarts.
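For example, to stop the stack and discard local data volumes:
$ supabase stop --no-backup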
Flags
--no-backup
Optional
no type
Deletes all data volumes after stopping.
--project-id <string>
Optional
no type
Local project ID to stop.
supabase stop
$ supabase status
Shows status of the Supabase local development stack.
Requires the local development stack to be started by running supabase start or supabase db start.
You can export the connection parameters for initializing supabase-js locally by specifying the -o env flag. Supported parameters include JWT_SECRET, ANON_KEY, and SERVICE_ROLE_KEY.
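For example, to export those parameters to a local env file (.env.local is a hypothetical target file):
$ supabase status -o env > .env.local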
$ supabase test db
Requires the local development stack to be started by running supabase start.
Runs pg_prove in a container with unit test files volume-mounted from the supabase/tests directory. Test files can have either a .sql or .pg extension.
Since each test is wrapped in its own transaction, it will be individually rolled back regardless of success or failure.
$ supabase db pull
Pulls schema changes from a remote database. A new migration file will be created under the supabase/migrations directory.
Requires your local project to be linked to a remote database by running supabase link. For self-hosted databases, you can pass in the connection parameters using --db-url flag.
Optionally, a new row can be inserted into the migration history table to reflect the current state of the remote database.
If no entries exist in the migration history table, pg_dump will be used to capture all contents of the remote schemas you have created. Otherwise, this command will only diff schema changes against the remote database, similar to running db diff --linked.
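As an illustration of pulling from a self-hosted database, note the percent-encoding of special characters in the password (the host, user, and password below are placeholders):
$ supabase db pull --db-url "postgresql://postgres:p%40ssword@db.example.com:5432/postgres"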
Flags
--db-url <string>
Optional
no type
Pulls from the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Pulls from the linked project.
--local
Optional
no type
Pulls from the local database.
-p, --password <string>
Optional
no type
Password to your remote Postgres database.
-s, --schema <strings>
Optional
no type
Comma separated list of schema to include.
supabase db pull [migration name] [flags]
$ supabase db push
Pushes all local migrations to a remote database.
Requires your local project to be linked to a remote database by running supabase link. For self-hosted databases, you can pass in the connection parameters using --db-url flag.
The first time this command is run, a migration history table will be created under supabase_migrations.schema_migrations. After successfully applying a migration, a new row will be inserted into the migration history table with timestamp as its unique id. Subsequent pushes will skip migrations that have already been applied.
If you need to mutate the migration history table, such as deleting existing entries or inserting new entries without actually running the migration, use the migration repair command.
Use the --dry-run flag to view the list of changes before applying.
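For example, to preview pending migrations against the linked project without applying them:
$ supabase db push --dry-run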
Flags
--db-url <string>
Optional
no type
Pushes to the database specified by the connection string (must be percent-encoded).
--dry-run
Optional
no type
Print the migrations that would be applied, but don't actually apply them.
--include-all
Optional
no type
Include all migrations not found on remote history table.
--include-roles
Optional
no type
Include custom roles from supabase/roles.sql.
--include-seed
Optional
no type
Include seed data from supabase/seed.sql.
--linked
Optional
no type
Pushes to the linked project.
--local
Optional
no type
Pushes to the local database.
-p, --password <string>
Optional
no type
Password to your remote Postgres database.
supabase db push
$ supabase db reset
Resets the local database to a clean state.
Requires the local development stack to be started by running supabase start.
Recreates the local Postgres container and applies all local migrations found in supabase/migrations directory. If test data is defined in supabase/seed.sql, it will be seeded after the migrations are run. Any other data or schema changes made during local development will be discarded.
Note that since Postgres roles are cluster level entities, those changes will persist between resets. In order to reset custom roles, you need to restart the local development stack.
Flags
--db-url <string>
Optional
no type
Resets the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Resets the linked project with local migrations.
--local
Optional
no type
Resets the local database with local migrations.
--version <string>
Optional
no type
Reset up to the specified version.
supabase db reset
$ supabase db dump
Dumps contents from a remote database.
Requires your local project to be linked to a remote database by running supabase link. For self-hosted databases, you can pass in the connection parameters using --db-url flag.
Runs pg_dump in a container with additional flags to exclude Supabase managed schemas. The ignored schemas include auth, storage, and those created by extensions.
The default dump does not contain any data or custom roles. To dump those contents explicitly, specify the --data-only or --role-only flag.
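For example, one possible layout is to dump data and roles into the conventional seed and roles files (the target paths are just a suggestion):
$ supabase db dump --data-only -f supabase/seed.sql
$ supabase db dump --role-only -f supabase/roles.sql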
Flags
--data-only
Optional
no type
Dumps only data records.
--db-url <string>
Optional
no type
Dumps from the database specified by the connection string (must be percent-encoded).
--dry-run
Optional
no type
Prints the pg_dump script that would be executed.
-f, --file <string>
Optional
no type
File path to save the dumped contents.
--keep-comments
Optional
no type
Keeps commented lines from pg_dump output.
--linked
Optional
no type
Dumps from the linked project.
--local
Optional
no type
Dumps from the local database.
-p, --password <string>
Optional
no type
Password to your remote Postgres database.
--role-only
Optional
no type
Dumps only cluster roles.
-s, --schema <strings>
Optional
no type
Comma separated list of schema to include.
--use-copy
Optional
no type
Uses copy statements in place of inserts.
supabase db dump -f supabase/schema.sql
$ supabase db diff
Diffs schema changes made to the local or remote database.
Requires the local development stack to be running when diffing against the local database. To diff against a remote or self-hosted database, specify the --linked or --db-url flag respectively.
Runs djrobstep/migra in a container to compare schema differences between the target database and a shadow database. The shadow database is created by applying migrations from the local supabase/migrations directory in a separate container. Output is written to stdout by default. For convenience, you can also save the schema diff as a new migration file by passing in the -f flag.
By default, all schemas in the target database are diffed. Use the --schema public,extensions flag to restrict diffing to a subset of schemas.
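For example, to diff only the public and extensions schemas and save the result as a new migration file (the migration name is arbitrary):
$ supabase db diff --schema public,extensions -f my_schema_change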
While the diff command is able to capture most schema changes, there are cases where it is known to fail. Currently, this could happen if your schema contains:
Changes to publication
Changes to storage buckets
Views with security_invoker attributes
Flags
--db-url <string>
Optional
no type
Diffs against the database specified by the connection string (must be percent-encoded).
-f, --file <string>
Optional
no type
Saves schema diff to a new migration file.
--linked
Optional
no type
Diffs local migration files against the linked project.
--local
Optional
no type
Diffs local migration files against the local database.
-s, --schema <strings>
Optional
no type
Comma separated list of schema to include.
--use-migra
Optional
no type
Use migra to generate schema diff.
--use-pgadmin
Optional
no type
Use pgAdmin to generate schema diff.
supabase db diff -f my_table
$ supabase db lint
Lints local database for schema errors.
Requires the local development stack to be running when linting against the local database. To lint against a remote or self-hosted database, specify the --linked or --db-url flag respectively.
Runs plpgsql_check extension in the local Postgres container to check for errors in all schemas. The default lint level is warning and can be raised to error via the --level flag.
To lint against specific schemas only, pass in the --schema flag.
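For example, to report only errors in the public schema:
$ supabase db lint --level error --schema public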
Flags
--db-url <string>
Optional
no type
Lints the database specified by the connection string (must be percent-encoded).
$ supabase migration new
A supabase/migrations directory will be created if it does not already exist in your current workdir. All schema migration files must be created in this directory following the pattern <timestamp>_<name>.sql.
Outputs from other commands like db diff may be piped to migration new <name> via stdin.
supabase migration new schema_test
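You can also pipe a schema diff straight into a new migration file, for example (the migration name is arbitrary):
$ supabase db diff | supabase migration new my_table_change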
$ supabase migration list
Lists migration history in both local and remote databases.
Requires your local project to be linked to a remote database by running supabase link. For self-hosted databases, you can pass in the connection parameters using --db-url flag.
Note that URL strings must be escaped according to RFC 3986.
Local migrations are stored in supabase/migrations directory while remote migrations are tracked in supabase_migrations.schema_migrations table. Only the timestamps are compared to identify any differences.
In case of discrepancies between the local and remote migration history, you can resolve them using the migration repair command.
Flags
--db-url <string>
Optional
no type
Lists migrations of the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Lists migrations applied to the linked project.
--local
Optional
no type
Lists migrations applied to the local database.
-p, --password <string>
Optional
no type
Password to your remote Postgres database.
supabase migration list
$ supabase migration repair
Repairs the remote migration history table.
Requires your local project to be linked to a remote database by running supabase link.
If your local and remote migration history goes out of sync, you can repair the remote history by marking specific migrations as --status applied or --status reverted. Marking as reverted will delete an existing record from the migration history table while marking as applied will insert a new record.
For example, your migration history table may look like this after running db remote commit for the first time.
$ supabase migration list
LOCAL │ REMOTE │ TIME (UTC)
─────────────────┼────────────────┼──────────────────────
20230103054303 │ 20230103054303 │ 2023-01-03 05:43:03
To reset your migration history to a clean state, first delete your local migration file.
$ rm supabase/migrations/20230103054303_remote_commit.sql
$ supabase migration list
LOCAL │ REMOTE │ TIME (UTC)
─────────────────┼────────────────┼──────────────────────
│ 20230103054303 │ 2023-01-03 05:43:03
Then mark the remote migration 20230103054303 as reverted.
$ supabase migration repair 20230103054303 --status reverted
Repaired migration history: 20230103054303 => reverted
$ supabase migration list
LOCAL │ REMOTE │ TIME (UTC)
─────────────────┼────────────────┼──────────────────────
Now you can run db remote commit again to dump the remote schema as a local migration file.
Flags
--db-url <string>
Optional
no type
Repairs migrations of the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Repairs the migration history of the linked project.
--local
Optional
no type
Repairs the migration history of the local database.
$ supabase inspect db calls
This command is much like the supabase inspect db outliers command, but ordered by the number of times a statement has been called.
You can use this information to see which queries are called most often, which can potentially be good candidates for optimisation.
QUERY │ TOTAL EXECUTION TIME │ PROPORTION OF TOTAL EXEC TIME │ NUMBER CALLS │ SYNC IO TIME
─────────────────────────────────────────────────┼──────────────────────┼───────────────────────────────┼──────────────┼──────────────────
SELECT * FROM users WHERE id = $1 │ 14:50:11.828939 │ 89.8% │ 183,389,757 │ 00:00:00.002018
SELECT * FROM user_events │ 01:20:23.466633 │ 1.4% │ 78,325 │ 00:00:00
INSERT INTO users (email, name) VALUES ($1, $2)│ 00:40:11.616882 │ 0.8% │ 54,003 │ 00:00:00.000322
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db calls
$ supabase inspect db long-running-queries
This command displays currently running queries that have been running for longer than 5 minutes, ordered descending by duration. Very long-running queries can be a source of multiple issues, such as preventing DDL statements from completing or vacuum being unable to update relfrozenxid.
PID │ DURATION │ QUERY
───────┼─────────────────┼───────────────────────────────────────────────────────────────────────────────────────
19578 | 02:29:11.200129 | EXPLAIN SELECT "students".* FROM "students" WHERE "students"."id" = 1450645 LIMIT 1
19465 | 02:26:05.542653 | EXPLAIN SELECT "students".* FROM "students" WHERE "students"."id" = 1889881 LIMIT 1
19632 | 02:24:46.962818 | EXPLAIN SELECT "students".* FROM "students" WHERE "students"."id" = 1581884 LIMIT 1
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db long-running-queries
$ supabase inspect db outliers
This command displays statements, obtained from pg_stat_statements, ordered by the amount of time to execute in aggregate. This includes the statement itself, the total execution time for that statement, the proportion of total execution time for all statements that statement has taken up, the number of times that statement has been called, and the amount of time that statement spent on synchronous I/O (reading/writing from the file system).
Typically, an efficient query will have an appropriate ratio of calls to total execution time, with as little time spent on I/O as possible. Queries that have a high total execution time but low call count should be investigated to improve their performance. Queries that have a high proportion of execution time being spent on synchronous I/O should also be investigated.
QUERY │ EXECUTION TIME │ PROPORTION OF EXEC TIME │ NUMBER CALLS │ SYNC IO TIME
─────────────────────────────────────────┼──────────────────┼─────────────────────────┼──────────────┼───────────────
SELECT * FROM archivable_usage_events.. │ 154:39:26.431466 │ 72.2% │ 34,211,877 │ 00:00:00
COPY public.archivable_usage_events (.. │ 50:38:33.198418 │ 23.6% │ 13 │ 13:34:21.00108
COPY public.usage_events (id, reporte.. │ 02:32:16.335233 │ 1.2% │ 13 │ 00:34:19.784318
INSERT INTO usage_events (id, retaine.. │ 01:42:59.436532 │ 0.8% │ 12,328,187 │ 00:00:00
SELECT * FROM usage_events WHERE (alp.. │ 01:18:10.754354 │ 0.6% │ 102,114,301 │ 00:00:00
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db outliers
$ supabase inspect db blocking
This command shows you statements that are currently holding locks and blocking other statements, as well as the statements that are being blocked. This can be used in conjunction with inspect db locks to determine which statements need to be terminated in order to resolve lock contention.
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db blocking
$ supabase inspect db locks
This command displays queries that have taken out an exclusive lock on a relation. Exclusive locks typically prevent other operations on that relation from taking place, and can be a cause of "hung" queries that are waiting for a lock to be granted.
If you see a query that is hanging for a very long time or causing blocking issues, you may consider killing the query by connecting to the database and running SELECT pg_cancel_backend(PID); to cancel the query. If the query still does not stop, you can force a hard stop by running SELECT pg_terminate_backend(PID);
PID │ RELNAME │ TRANSACTION ID │ GRANTED │ QUERY │ AGE
─────────┼─────────┼────────────────┼─────────┼─────────────────────────────────────────┼───────────
328112 │ null │ 0 │ t │ SELECT * FROM logs; │ 00:04:20
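As a sketch, assuming psql is installed and $DATABASE_URL holds your connection string, cancelling the PID from the sample output above might look like:
$ psql "$DATABASE_URL" -c 'SELECT pg_cancel_backend(328112);'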
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db locks
$ supabase inspect db total-index-size
This command displays the total size of all indexes on the database. It is calculated by taking the number of pages (reported in relpages) and multiplying it by the page size (8192 bytes).
SIZE
─────────
12 MB
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db total-index-size
$ supabase inspect db index-sizes
This command displays the size of each index in the database. It is calculated by taking the number of pages (reported in relpages) and multiplying it by the page size (8192 bytes).
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db index-sizes
$ supabase inspect db index-usage
This command provides information on the efficiency of indexes, represented as what percentage of total scans were index scans. A low percentage can indicate under-indexing, or the wrong data being indexed.
TABLE NAME │ PERCENTAGE OF TIMES INDEX USED │ ROWS IN TABLE
────────────────────┼────────────────────────────────┼────────────────
user_events │ 99 │ 4225318
user_feed │ 99 │ 3581573
unindexed_table │ 0 │ 322911
job │ 100 │ 33242
schema_migrations │ 97 │ 0
migrations │ Insufficient data │ 0
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db index-usage
$ supabase inspect db unused-indexes
This command displays indexes that have fewer than 50 scans recorded against them and that are greater than 5 pages in size, ordered by size relative to the number of index scans. It is generally useful for discovering indexes that are unused. Indexes can impact write performance, as well as read performance should they occupy space in memory, so it's a good idea to remove indexes that are not needed or being used.
TABLE │ INDEX │ INDEX SIZE │ INDEX SCANS
─────────────────────┼────────────────────────────────────────────┼────────────┼──────────────
public.users │ user_id_created_at_idx │ 97 MB │ 0
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db unused-indexes
$ supabase inspect db total-table-sizes
This command displays the total size of each table in the database. It is the sum of the values that pg_table_size() and pg_indexes_size() gives for each table. System tables inside pg_catalog and information_schema are not included.
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db total-table-sizes
$ supabase inspect db table-sizes
This command displays the size of each table in the database. It is calculated by using the system administration function pg_table_size(), which includes the size of the main data fork, free space map, visibility map and TOAST data. It does not include the size of the table's indexes.
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db table-sizes
$ supabase inspect db cache-hit
This command provides information on the efficiency of the buffer cache and how often your queries have to hit the disk rather than reading from memory. Information on both index reads (index hit rate) as well as table reads (table hit rate) is shown. In general, databases with low cache hit rates perform worse as it is slower to go to disk than to retrieve data from memory. If your table hit rate is low, this can indicate that you do not have enough RAM and you may benefit from upgrading to a larger compute add-on with more memory. If your index hit rate is low, this may indicate that there is scope to add more appropriate indexes.
The hit rates are calculated as a ratio of number of table or index blocks fetched from the postgres buffer cache against the sum of cached blocks and uncached blocks read from disk.
On smaller compute plans (free, small, medium), a ratio of below 99% can indicate a problem. On larger plans the hit rates may be lower but performance will remain constant as the data may use the OS cache rather than Postgres buffer cache.
NAME │ RATIO
─────────────────┼───────────
index hit rate │ 0.996621
table hit rate │ 0.999341
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db cache-hit
$ supabase inspect db table-record-counts
This command displays an estimated count of rows per table, descending by estimated count. The estimated count is derived from n_live_tup, which is updated by vacuum operations. Due to the way n_live_tup is populated, sparse vs. dense pages can result in estimations that are significantly out from the real count of rows.
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db table-record-counts
$ supabase inspect db seq-scans
This command displays the number of sequential scans recorded against all tables, descending by count of sequential scans. Tables that have very high numbers of sequential scans may be underindexed, and it may be worth investigating queries that read from these tables.
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db seq-scans
$ supabase inspect db replication-slots
This command shows information about logical replication slots that are set up on the database. It shows if the slot is active, the state of the WAL sender process ('startup', 'catchup', 'streaming', 'backup', 'stopping'), the replication client address, and the replication lag in GB.
This command is useful to check that the amount of replication lag is as low as possible. Replication lag can occur due to network latency issues, slow disk I/O, long-running transactions, or the subscriber being unable to consume WAL fast enough.
NAME │ ACTIVE │ STATE │ REPLICATION CLIENT ADDRESS │ REPLICATION LAG GB
─────────────────────────────────────────────┼────────┼─────────┼────────────────────────────┼─────────────────────
supabase_realtime_replication_slot │ t │ N/A │ N/A │ 0
datastream │ t │ catchup │ 24.201.24.106 │ 45
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db replication-slots
$ supabase inspect db role-connections
This command shows the number of active connections for each database role, to see which specific role might be consuming more connections than expected.
This is a Supabase specific command. You can see this breakdown on the dashboard as well:
https://app.supabase.com/project/_/database/roles
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db role-connections
$ supabase inspect db bloat
This command displays an estimation of table "bloat". Due to Postgres' MVCC, when data is updated or deleted, new rows are created and old rows are made invisible and marked as "dead tuples". Usually the autovacuum process will asynchronously clean the dead tuples. Sometimes autovacuum is unable to work fast enough to reduce or prevent tables from becoming bloated. High bloat can slow down queries, cause excessive IOPS, and waste space in your database.
Tables with a high bloat ratio should be investigated to see if vacuuming is not quick enough or if there are other issues.
TYPE │ SCHEMA NAME │ OBJECT NAME │ BLOAT │ WASTE
────────┼─────────────┼────────────────────────────┼───────┼─────────────
table │ public │ very_bloated_table │ 41.0 │ 700 MB
table │ public │ my_table │ 4.0 │ 76 MB
table │ public │ happy_table │ 1.0 │ 1472 kB
index │ public │ happy_table::my_nice_index │ 0.7 │ 880 kB
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db bloat
$ supabase inspect db vacuum-stats
This shows you stats about the vacuum activities for each table. Due to Postgres' MVCC, when data is updated or deleted, new rows are created and old rows are made invisible and marked as "dead tuples". Usually the autovacuum process will asynchronously clean the dead tuples.
The command lists when the last vacuum and last auto vacuum took place, the row count on the table, as well as the count of dead rows and whether autovacuum is expected to run or not. If the number of dead rows is much higher than the row count, or if an autovacuum is expected but has not been performed for some time, this can indicate that autovacuum is not able to keep up and that your vacuum settings need to be tweaked, or that you require more compute or disk IOPS to allow autovacuum to complete.
SCHEMA │ TABLE │ LAST VACUUM │ LAST AUTO VACUUM │ ROW COUNT │ DEAD ROW COUNT │ EXPECT AUTOVACUUM?
──────────────────────┼──────────────────────────────────┼─────────────┼──────────────────┼──────────────────────┼────────────────┼─────────────────────
auth │ users │ │ 2023-06-26 12:34 │ 18,030 │ 0 │ no
public │ profiles │ │ 2023-06-26 23:45 │ 13,420 │ 28 │ no
public │ logs │ │ 2023-06-26 01:23 │ 1,313,033 │ 3,318,228 │ yes
storage │ objects │ │ │ No stats │ 0 │ no
storage │ buckets │ │ │ No stats │ 0 │ no
supabase_migrations │ schema_migrations │ │ │ No stats │ 0 │ no
Flags
--db-url <string>
Optional
no type
Inspect the database specified by the connection string (must be percent-encoded).
--linked
Optional
no type
Inspect the linked project.
--local
Optional
no type
Inspect the local database.
supabase inspect db vacuum-stats
$ supabase sso list
List all connections to an SSO identity provider for your Supabase project.
Flags
-o, --output <[ pretty | json | toml | yaml ]>
Optional
no type
Output format
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase sso list
$ supabase sso show
Provides information about an established connection to an identity provider. You can use --metadata to obtain the raw SAML 2.0 Metadata XML document stored in your project's configuration.
Flags
--metadata
Optional
no type
Show SAML 2.0 XML Metadata only
-o, --output <[ pretty | json | toml | yaml ]>
Optional
no type
Output format
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase sso show 6df4d73f-bf21-405f-a084-b11adf19fea5 \
--project-ref abcdefghijklmnopqrst
$ supabase sso info
Returns all of the important SSO information necessary for your project to be registered with a SAML 2.0 compatible identity provider.
Flags
-o, --output <[ pretty | json | toml | yaml ]>
Optional
no type
Output format
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase sso info --project-ref abcdefghijklmnopqrst
$ supabase sso update
Update the configuration settings of an already added SSO identity provider.
Flags
--add-domains <strings>
Optional
no type
Add this comma separated list of email domains to the identity provider.
--attribute-mapping-file <string>
Optional
no type
File containing a JSON mapping between SAML attributes to custom JWT claims.
--domains <strings>
Optional
no type
Replace domains with this comma separated list of email domains.
--metadata-file <string>
Optional
no type
File containing a SAML 2.0 Metadata XML document describing the identity provider.
--metadata-url <string>
Optional
no type
URL pointing to a SAML 2.0 Metadata XML document describing the identity provider.
--remove-domains <strings>
Optional
no type
Remove this comma separated list of email domains from the identity provider.
--skip-url-validation
Optional
no type
Whether local validation of the SAML 2.0 Metadata URL should not be performed.
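As a sketch, reusing the provider ID and project ref from the sso show example above (the domain and metadata URL are placeholders):
$ supabase sso update 6df4d73f-bf21-405f-a084-b11adf19fea5 \
  --project-ref abcdefghijklmnopqrst \
  --add-domains engineering.example.com \
  --metadata-url 'https://example.com/idp/metadata.xml'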
$ supabase sso remove
Remove a connection to an already added SSO identity provider. Removing the provider will prevent existing users from logging in. Please treat this command with care.
$ supabase domains activate
Activates the custom hostname configuration for a project.
This reconfigures your Supabase project to respond to requests on your custom hostname.
After the custom hostname is activated, your project's auth services will no longer function on the Supabase-provisioned subdomain.
Flags
--include-raw-output
Optional
no type
Include raw output (useful for debugging).
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase domains activate
$ supabase domains create
Create a custom hostname for your Supabase project.
Expects your custom hostname to have a CNAME record to your Supabase project's subdomain.
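For illustration, assuming a hypothetical hostname api.example.com with a CNAME record pointing at abcdefghijklmnopqrst.supabase.co, the command might look like:
$ supabase domains create --project-ref abcdefghijklmnopqrst --custom-hostname api.example.com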
Flags
--custom-hostname <string>
Optional
no type
The custom hostname to use for your Supabase project.
--include-raw-output
Optional
no type
Include raw output (useful for debugging).
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase domains create [flags]
$ supabase domains get
Retrieve the custom hostname config for your project, as stored in the Supabase platform.
Flags
--include-raw-output
Optional
no type
Include raw output (useful for debugging).
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase domains get
$ supabase domains reverify
Re-verify the custom hostname config for your project
Flags
--include-raw-output
Optional
no type
Include raw output (useful for debugging).
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase domains reverify
$ supabase domains delete
Deletes the custom hostname config for your project
Flags
--include-raw-output
Optional
no type
Include raw output (useful for debugging).
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase domains delete
$ supabase vanity-subdomains
Manage vanity subdomains for Supabase projects.
Usage of vanity subdomains and custom domains is mutually exclusive.
$ supabase vanity-subdomains activate
Activate a vanity subdomain for your Supabase project.
This reconfigures your Supabase project to respond to requests on your vanity subdomain.
After the vanity subdomain is activated, your project's auth services will no longer function on the {project-ref}.{supabase-domain} hostname.
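For example, requesting a hypothetical subdomain for the project ref used elsewhere in these examples:
$ supabase vanity-subdomains activate --desired-subdomain my-project-name --project-ref abcdefghijklmnopqrst --experimental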
Flags
--desired-subdomain <string>
Optional
no type
The desired vanity subdomain to use for your Supabase project.
--experimental
REQUIRED
no type
enable experimental features
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase vanity-subdomains activate [flags]
$ supabase vanity-subdomains get
Get the current vanity subdomain
Flags
--experimental
REQUIRED
no type
enable experimental features
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase vanity-subdomains get
$ supabase vanity-subdomains check-availability
Checks if a desired subdomain is available for use
Flags
--desired-subdomain <string>
Optional
no type
The desired vanity subdomain to use for your Supabase project.
$ supabase postgres-config get
Get the current Postgres database config overrides
Flags
--experimental
REQUIRED
no type
enable experimental features
--project-ref <string>
Optional
no type
Project ref of the Supabase project.
supabase postgres-config get
$ supabase postgres-config update
Overriding the default Postgres config could result in unstable database behavior.
Custom configuration also overrides the optimizations generated based on the compute add-ons in use.
Flags
--config <strings>
Optional
no type
Config overrides specified as a 'key=value' pair
--replace-existing-overrides
Optional
no type
If true, replaces all existing overrides with the ones provided. If false (default), merges existing overrides with the ones provided.
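For example, a sketch that overrides a single setting; the value shown is illustrative, and --experimental is assumed to be required here as it is for postgres-config get:
$ supabase postgres-config update --config statement_timeout='10s' --experimental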