diff --git a/docs/content/docs/get-started/work-with-terraform/create-service-account.webp b/docs/content/docs/get-started/work-with-terraform/create-service-account.webp deleted file mode 100644 index fd45b9e1d..000000000 Binary files a/docs/content/docs/get-started/work-with-terraform/create-service-account.webp and /dev/null differ diff --git a/docs/content/docs/integrations/neon/neon-bytebase-create-instance.webp b/docs/content/docs/integrations/neon/neon-bytebase-create-instance.webp deleted file mode 100644 index a77be06f2..000000000 Binary files a/docs/content/docs/integrations/neon/neon-bytebase-create-instance.webp and /dev/null differ diff --git a/docs/content/docs/integrations/neon/neon-bytebase-database.webp b/docs/content/docs/integrations/neon/neon-bytebase-database.webp deleted file mode 100644 index f1cb5ea64..000000000 Binary files a/docs/content/docs/integrations/neon/neon-bytebase-database.webp and /dev/null differ diff --git a/docs/content/docs/integrations/neon/neon-bytebase-instance.webp b/docs/content/docs/integrations/neon/neon-bytebase-instance.webp deleted file mode 100644 index 77d882915..000000000 Binary files a/docs/content/docs/integrations/neon/neon-bytebase-instance.webp and /dev/null differ diff --git a/docs/content/docs/integrations/neon/neon-env-file-details.webp b/docs/content/docs/integrations/neon/neon-env-file-details.webp deleted file mode 100644 index 6a83ebab9..000000000 Binary files a/docs/content/docs/integrations/neon/neon-env-file-details.webp and /dev/null differ diff --git a/docs/content/docs/integrations/neon/neon-project-details.webp b/docs/content/docs/integrations/neon/neon-project-details.webp deleted file mode 100644 index 5780f0e61..000000000 Binary files a/docs/content/docs/integrations/neon/neon-project-details.webp and /dev/null differ diff --git a/docs/content/docs/integrations/prisma/ppg-bytebase-database.webp b/docs/content/docs/integrations/prisma/ppg-bytebase-database.webp deleted file mode 100644 index 
0460d94b1..000000000 Binary files a/docs/content/docs/integrations/prisma/ppg-bytebase-database.webp and /dev/null differ diff --git a/docs/content/docs/integrations/prisma/ppg-create-instance.webp b/docs/content/docs/integrations/prisma/ppg-create-instance.webp deleted file mode 100644 index fe29c0658..000000000 Binary files a/docs/content/docs/integrations/prisma/ppg-create-instance.webp and /dev/null differ diff --git a/docs/content/docs/integrations/prisma/ppg-project-connection.webp b/docs/content/docs/integrations/prisma/ppg-project-connection.webp deleted file mode 100644 index ca62743ea..000000000 Binary files a/docs/content/docs/integrations/prisma/ppg-project-connection.webp and /dev/null differ diff --git a/docs/content/docs/integrations/render/render-bytebase-create-instance.webp b/docs/content/docs/integrations/render/render-bytebase-create-instance.webp deleted file mode 100644 index d405e1af3..000000000 Binary files a/docs/content/docs/integrations/render/render-bytebase-create-instance.webp and /dev/null differ diff --git a/docs/content/docs/integrations/render/render-bytebase-database.webp b/docs/content/docs/integrations/render/render-bytebase-database.webp deleted file mode 100644 index 767397cdc..000000000 Binary files a/docs/content/docs/integrations/render/render-bytebase-database.webp and /dev/null differ diff --git a/docs/content/docs/integrations/render/render-bytebase-instance.webp b/docs/content/docs/integrations/render/render-bytebase-instance.webp deleted file mode 100644 index ce57a820a..000000000 Binary files a/docs/content/docs/integrations/render/render-bytebase-instance.webp and /dev/null differ diff --git a/docs/content/docs/integrations/render/render-database-connections.webp b/docs/content/docs/integrations/render/render-database-connections.webp deleted file mode 100644 index 81256e4ef..000000000 Binary files a/docs/content/docs/integrations/render/render-database-connections.webp and /dev/null differ diff --git 
a/docs/content/docs/integrations/supabase/supabase-bytebase-create-instance.webp b/docs/content/docs/integrations/supabase/supabase-bytebase-create-instance.webp deleted file mode 100644 index 62b5a4f0b..000000000 Binary files a/docs/content/docs/integrations/supabase/supabase-bytebase-create-instance.webp and /dev/null differ diff --git a/docs/content/docs/integrations/supabase/supabase-bytebase-database.webp b/docs/content/docs/integrations/supabase/supabase-bytebase-database.webp deleted file mode 100644 index 13868a899..000000000 Binary files a/docs/content/docs/integrations/supabase/supabase-bytebase-database.webp and /dev/null differ diff --git a/docs/content/docs/integrations/supabase/supabase-bytebase-instance.webp b/docs/content/docs/integrations/supabase/supabase-bytebase-instance.webp deleted file mode 100644 index 06467e87a..000000000 Binary files a/docs/content/docs/integrations/supabase/supabase-bytebase-instance.webp and /dev/null differ diff --git a/docs/content/docs/integrations/supabase/supabase-database-setting.webp b/docs/content/docs/integrations/supabase/supabase-database-setting.webp deleted file mode 100644 index 39142b700..000000000 Binary files a/docs/content/docs/integrations/supabase/supabase-database-setting.webp and /dev/null differ diff --git a/docs/content/docs/integrations/supabase/supabase-sql-editor.webp b/docs/content/docs/integrations/supabase/supabase-sql-editor.webp deleted file mode 100644 index 5a5755f1e..000000000 Binary files a/docs/content/docs/integrations/supabase/supabase-sql-editor.webp and /dev/null differ diff --git a/docs/docs.json b/docs/docs.json index 3485375a2..e2471c98d 100644 --- a/docs/docs.json +++ b/docs/docs.json @@ -278,58 +278,16 @@ "security/audit-log", "administration/license", "administration/mode", + "integrations/mcp", "ai-assistant", "change-database/environment-policy/overview", "administration/database-group", "administration/customize-logo", "security/watermark", "administration/announcement", 
- "administration/archive" - ] - } - ] - }, - { - "tab": "Integrations", - "groups": [ - { - "group": "API", - "pages": [ - "integrations/api/overview", - "integrations/api/authentication", - "integrations/api/sql-review", - "integrations/api/sql-editor", - "integrations/api/release", - "integrations/api/plan", - "integrations/api/rollout", - "integrations/api/issue", - "integrations/api/permission", - "integrations/api/data-classification", - "integrations/api/audit-log" - ] - }, - { - "group": "Infrastructure as Code", - "pages": [ + "administration/archive", "integrations/terraform/overview" ] - }, - { - "group": "AI Integration", - "pages": [ - "integrations/mcp" - ] - }, - { - "group": "3rd Party", - "pages": [ - "integrations/slack", - "integrations/jira", - "integrations/prisma", - "integrations/supabase", - "integrations/render", - "integrations/neon" - ] - } ] }, diff --git a/docs/integrations/api/audit-log.mdx b/docs/integrations/api/audit-log.mdx deleted file mode 100644 index dec5960df..000000000 --- a/docs/integrations/api/audit-log.mdx +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Audit Log ---- - - - -| | | -| ----------------- | --------------------------------------------- | -| Endpoint | [POST /v1/auditLogs:search](/api-reference/auditlogservice/post-v1auditlogs:search) | - -Bytebase provides workspace-level and project-level audit logs. You may call the Bytebase API to export -the audit logs and send them to an external log sink such as AWS S3. 
- -## Workspace level - -```bash -# Search -curl -X POST %%bb_api_endpoint%%/v1/auditLogs:search \ - -H 'Authorization: Bearer '${bytebase_token} -``` - -```bash -# Export in base64 -curl -X POST %%bb_api_endpoint%%/v1/auditLogs:export \ - -H 'Authorization: Bearer '${bytebase_token} \ - --data '{ - "format": "JSON" - }' -``` - -## Project level - -```bash -# Search -curl -X POST %%bb_api_endpoint%%/v1/projects/project-sample/auditLogs:search \ - -H 'Authorization: Bearer '${bytebase_token} -``` - -```bash -# Export in base64 -curl -X POST %%bb_api_endpoint%%/v1/projects/project-sample/auditLogs:export \ - -H 'Authorization: Bearer '${bytebase_token} \ - --data '{ - "format": "JSON" - }' -``` diff --git a/docs/integrations/api/authentication.mdx b/docs/integrations/api/authentication.mdx deleted file mode 100644 index 1c0baf297..000000000 --- a/docs/integrations/api/authentication.mdx +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: Authentication ---- - -| | | -| -------- | ----------------------------------------------------------------------------------- | -| Endpoint | [POST /v1/auth/login](/api-reference/authservice/post-v1authlogin) | - -## Service Account - -A service account is a non-human account used by applications, scripts, or services to access the Bytebase API. The service account follows the same permission model as the normal user account. The only exception -is that a service account can't be added to a group, as it's an [anti-pattern](https://cloud.google.com/iam/docs/best-practices-service-accounts#groups). - -In **Users & Groups** under the **Security & Policy** section, click **Add User** on the upper right. Choose the **Service Account** type, fill in the email, and click **Confirm**. Then you can see your service account in the list. Click **Copy Service Key** right away. - -![create-service-account](/content/docs/get-started/work-with-terraform/create-service-account.webp) - - - -You can only copy the key right after creating the service account. 
The key will disappear if you refresh the page. - - - -## Login to fetch the token - -You need to obtain the exchange token before calling the API. - -```bash -export bytebase_url=%%bb_api_endpoint%% -export bytebase_account=<>@service.bytebase.com -export bytebase_password=<> - -bytebase_token=$(curl -v ${bytebase_url}/v1/auth/login \ - --data-raw '{"email":"'${bytebase_account}'","password":"'${bytebase_password}'"}' \ - --compressed 2>&1 | grep token | grep -o 'access-token=[^;]*;' | grep -o '[^;]*' | sed 's/access-token=//g; s/;//g') -``` - -## Test API - -```bash -# List projects -curl --request GET ${bytebase_url}/v1/projects \ - --header 'Authorization: Bearer '${bytebase_token} -``` diff --git a/docs/integrations/api/data-classification.mdx b/docs/integrations/api/data-classification.mdx deleted file mode 100644 index a810815de..000000000 --- a/docs/integrations/api/data-classification.mdx +++ /dev/null @@ -1,150 +0,0 @@ ---- -title: Data Classification ---- - - - -[Data Classification](/security/data-masking/data-classification/) allows you to classify columns -and manage masking policy for many columns by controlling only a small number of classifications. - -You can call Bytebase API to configure data classification. 
- -## Configure Classification - -| | | -| -------- | ----------------------------------------------------------------------------------------------------- | -| Endpoint | [PATCH /v1/v1settings/dataClassificationSettingValue](/api-reference/settingservice/patch-v1settings) | - -```shell -curl --request PATCH ${bytebase_url}/v1/settings/bb.workspace.data-classification \ - --header 'Authorization: Bearer '${bytebase_token} \ - --data '{ - "name": "bb.workspace.data-classification", - "value": { - "data_classification_setting_value": { - "configs": [ - { - "title": "Classification Example", - "levels": [ - { - "id": "1", - "title": "Level 1", - "description": "" - }, - { - "id": "2", - "title": "Level 2", - "description": "" - } - ], - "classification": { - "1": { - "id": "1", - "title": "Basic", - "description": "" - }, - "1-1": { - "id": "1-1", - "title": "Basic", - "description": "", - "levelId": "1" - }, - "1-2": { - "id": "1-2", - "title": "Assert", - "description": "", - "levelId": "1" - }, - "1-3": { - "id": "1-3", - "title": "Contact", - "description": "", - "levelId": "2" - }, - "1-4": { - "id": "1-4", - "title": "Health", - "description": "", - "levelId": "2" - }, - "2": { - "id": "2", - "title": "Relationship", - "description": "" - }, - "2-1": { - "id": "2-1", - "title": "Social", - "description": "", - "levelId": "1" - }, - "2-2": { - "id": "2-2", - "title": "Business", - "description": "", - "levelId": "1" - } - } - } - ] - } - } -}' -``` - -## Classify All Columns and Tables in a Database - -| | | -| -------- | --------------------------------------------------------------------------------------------- | -| Endpoint | [PATCH /v1/v1instances/databases](/api-reference/databaseservice/patch-v1instances-databases) | - - - -The API only supports to classify **an entire database at once**. You need to pass the entire schema configs for the target database. The passed schema configs will overwrite -the existing schema configs. 
- - - -```shell -curl --request PATCH ${bytebase_url}/v1/instances/prod-sample-instance/databases/hr_prod/metadata?update_mask=schema_configs \ - --header 'Authorization: Bearer '${bytebase_token} \ - --data '{ - "name": "instances/prod-sample-instance/databases/hr_prod/metadata", - "schemaConfigs": [ - { - "name": "public", - "tableConfigs": [ - { - "name": "department", - "columnConfigs": [ - { - "name": "dept_no", - "semanticTypeId": "", - "labels": {}, - "classificationId": "1-1" - }, - { - "name": "dept_name", - "semanticTypeId": "", - "labels": {}, - "classificationId": "1-2" - } - ] - }, - { - "name": "dept_emp", - "columnConfigs": [ - { - "name": "dept_no", - "semanticTypeId": "", - "labels": {}, - "classificationId": "1-1" - } - ] - } - ] - } - ] -}' - -``` diff --git a/docs/integrations/api/issue.mdx b/docs/integrations/api/issue.mdx deleted file mode 100644 index 20f408fa9..000000000 --- a/docs/integrations/api/issue.mdx +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: Issue ---- - - - -| | | -| ------------- | ------------------------------------------- | -| Issue Endpoint | [POST /v1/projects/-/issues](/api-reference/issueservice/post-v1projects-issues) | - -`Issue` drives the database operations in Bytebase. The issue contains following info: - -- **Issue metadata.** e.g `title` and `description`. -- **Plan.** Contain one or multiple change statements and dictate how they are grouped and ordered. The plan layouts how to execute the change statements. - - **Sheet**. Plan references change statements via the `Sheet` object. Each `Sheet` contains one or more change statements. - - **Step**. Plan orchestrates the order via `Step`. Each `Step` specifies one or more changes units. A change unit specifies the SQL statements via `Sheet` and the target database. -- **Rollout.** The actual execution of the plan. 
- -## How to create an issue - -Code sample: https://github.com/bytebase/upsert-issue-action/blob/main/src/main.ts#L86-L92 - -```ts -// Create plan -let plan = await createPlan(changes, title, description); - -// Create rollout -let rollout = await createRollout(plan.name); - -// Create issue -let issue = await createIssue(plan.name, rollout.name, title, description); -``` - -### Step 1: Create a plan - -See [Plan API](/integrations/api/plan). - -### Step 2: Create a rollout for the plan - -See [Rollout API](/integrations/api/rollout). - -### Step 3: Create an issue including both the plan and the rollout diff --git a/docs/integrations/api/overview.mdx b/docs/integrations/api/overview.mdx deleted file mode 100644 index e242faf41..000000000 --- a/docs/integrations/api/overview.mdx +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Overview ---- - -Bytebase provides both a [gRPC API](https://github.com/bytebase/bytebase/tree/main/proto/gen/grpc-doc/v1) and a [RESTful HTTP API](https://api.bytebase.com). - -You can manipulate every aspect of Bytebase via the API. In fact, the Bytebase UI console is built on the -same API. 
You can use Bytebase as a headless database workflow backend and integrate it with your own -internal developer platform: - -- [SQL lint](/integrations/api/sql-review/) - -- [Database change deployment](/integrations/api/issue/) - -- [Embedded SQL Editor](/integrations/api/sql-editor/) diff --git a/docs/integrations/api/permission.mdx b/docs/integrations/api/permission.mdx deleted file mode 100644 index c6affb9a0..000000000 --- a/docs/integrations/api/permission.mdx +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Permission ---- - - - -| | | -| ---------------------- | --------------------------------------------------------------------------------------------------- | -| Workspace IAM Endpoint | [GET /v1/workspaces/-:getIamPolicy](/api-reference/workspaceservice/get-v1workspaces-:getIamPolicy) | -| Project IAM Endpoint | [GET /v1/projects/-:getIamPolicy](/api-reference/projectservice/get-v1projects-:getIamPolicy) | -| Role Endpoint | [GET /v1/roles](/api-reference/roleservice/get-v1roles) | -| User Endpoint | [GET /v1/users](/api-reference/userservice/get-v1users) | -| Group Endpoint | [GET /v1/groups](/api-reference/groupservice/get-v1groups) | - -Bytebase employs RBAC and has 2 permission levels: **Workspace Level** and **Project Level**. **Permissions** are granted to **Roles**, and **Roles** are assigned to **Users** and **Groups**. Permission details such as expiration -time are stored in [CEL (Common Expression Language)](https://cel.dev). - -If you have custom reporting requirements, you can call the Bytebase permission related API. 
diff --git a/docs/integrations/api/plan.mdx b/docs/integrations/api/plan.mdx deleted file mode 100644 index 133346fe3..000000000 --- a/docs/integrations/api/plan.mdx +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Plan ---- - -import TaskRunOrder from '/snippets/tutorials/task-run-order.mdx'; - -| | | -| ------------- | ------------------------------------------- | -| Plan Endpoint | [POST /v1/projects/-/plans](/api-reference/planservice/post-v1projects-plans) | -| Sheet Endpoint | [POST /v1/projects/-/sheets](/api-reference/sheetservice/post-v1projects-sheets) | - -`Plan` contains one or multiple change statements and dictate how they are grouped and ordered. The plan layouts how to execute the change statements. - -- **Sheet**. Plan references change statements via the `Sheet` object. Each `Sheet` contains one or more change statements. -- **Step**. Plan orchestrates the order via `Step`. Each `Step` specifies one or more changes units. A change unit specifies the SQL statements via `Sheet` and the target database. - -Code sample: https://github.com/bytebase/create-plan-from-release-action/blob/main/src/main.ts - -Inside the plan, create one or more sheets if needed. 
Then you orchestrate the order via Steps: - -```json -{ - "steps": [ - { - "title": "step 1", - "specs": [ - { - "earliestAllowedTime": null, - "id": "083c1c01-a0a6-485d-ae60-d6de2760ca4f", - "dependsOnSpecs": [], - "changeDatabaseConfig": { - "target": "instances/oracle/databases/db1", - "sheet": "projects/sample/sheets/741", - "type": "MIGRATE", - "schemaVersion": "", - "ghostFlags": {}, - "preUpdateBackupDetail": { - "database": "" - } - } - }, - { - "earliestAllowedTime": null, - "id": "faa54bb9-0bb3-42bf-aa10-cffc73e19e33", - // Wait for the previous task to finish - "dependsOnSpecs": ["083c1c01-a0a6-485d-ae60-d6de2760ca4f"], - "changeDatabaseConfig": { - "target": "instances/oracle/databases/db1", - "sheet": "projects/sample/sheets/742", - "type": "MIGRATE", - "schemaVersion": "", - "ghostFlags": {}, - "preUpdateBackupDetail": { - "database": "" - } - } - } - ] - }, - { - "title": "step 2", - "specs": [{...}, {...}] - } - ] -} -``` - -Each spec corresponds to a task. A task is a single change unit. Tasks run in the following order: - - - -If you want to enforce a strict running order inside a step/stage, you can specify `dependsOnSpecs` with the previous task. 
diff --git a/docs/integrations/api/release.mdx b/docs/integrations/api/release.mdx deleted file mode 100644 index 2d3f61a2a..000000000 --- a/docs/integrations/api/release.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Release ---- - -| | | -| --------------- | --------------------------------------------- | -| Endpoint | [POST /v1/projects/-/releases](/api-reference/releaseservice/post-v1projects-releases) | -| Example | https://github.com/bytebase/create-release-action/blob/main/src/main.ts | diff --git a/docs/integrations/api/rollout.mdx b/docs/integrations/api/rollout.mdx deleted file mode 100644 index da6063862..000000000 --- a/docs/integrations/api/rollout.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Rollout ---- - -| | | -| --------------- | --------------------------------------------- | -| Rollout Endpoint | [POST /v1/projects/-/rollouts](/api-reference/rolloutservice/post-v1projects-rollouts) | -| Example | https://github.com/bytebase/rollout-action/blob/main/src/main.ts | diff --git a/docs/integrations/api/sql-editor.mdx b/docs/integrations/api/sql-editor.mdx deleted file mode 100644 index 0e13c3a89..000000000 --- a/docs/integrations/api/sql-editor.mdx +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: SQL Editor ---- - - - -You can configure [database permissions](/integrations/api/permission) and masking policies via API and embed -Bytebase SQL Editor into your own internal web portal. diff --git a/docs/integrations/api/sql-review.mdx b/docs/integrations/api/sql-review.mdx deleted file mode 100644 index 7e2597b5a..000000000 --- a/docs/integrations/api/sql-review.mdx +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: SQL Review API ---- - - - - -The SQL Review API provides SQL checks based on your schema review policy. - - - -Before you start, you should configure the [SQL Review Policy](/sql-review/review-policy). 
- - -## Batch API (recommended) - -| | | -| -------- | ---------------------------------------------------------------------------------------------------------------------------- | -| Endpoint | [POST /v1/projects/-/releases:check](/api-reference/releaseservice/post-v1projects-releases:check) | -| Example | https://github.com/bytebase/example-api/tree/main/sql-review#batch-api-recommended | - -The Batch API allows you to validate multiple statements across multiple databases in a single API call. - -You should use the Batch API for the GitOps workflow, because a single PR or MR may contain multiple dependent -migration files (e.g., `1_create_t1_table.sql`, `2_create_t1_index.sql`). By retaining the context from -earlier files, the Batch API ensures each subsequent file is validated accurately. - -## Simple API - -| | | -| -------- | -------------------------------------------------------------------------------- | -| Endpoint | [POST /v1/sql/check](/api-reference/sqlservice/post-v1sqlcheck) | -| Example | https://github.com/bytebase/example-api/tree/main/sql-review#simple-api | - -The Simple API allows you to validate a single statement against a single database. diff --git a/docs/integrations/jira.mdx b/docs/integrations/jira.mdx deleted file mode 100644 index b26e03706..000000000 --- a/docs/integrations/jira.mdx +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Jira ---- - -## Custom integration via API - - - - diff --git a/docs/integrations/neon.mdx b/docs/integrations/neon.mdx deleted file mode 100644 index 910825513..000000000 --- a/docs/integrations/neon.mdx +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: Neon ---- - -[Neon](https://neon.tech/) is a fully managed serverless PostgreSQL that offers cool features such as [database branching](https://neon.tech/docs/introduction/branching/) and bottomless storage. - -You can create a PostgreSQL instance on Neon and use Bytebase to manage the database development lifecycle for those databases. 
- -While Neon already has developer-oriented features like branching, Bytebase adds extra value to offer a systematic database development and change workflow. This is especially useful for cross-functional teams requiring collaboration (e.g. having a dedicated DBA or platform engineering team apart from the application development teams). - -## Prerequisites - -- [Sign up](https://console.neon.tech/) for Neon; you can sign in with a GitHub or Google account. -- After signing in, you are directed to the Neon Console where you can [set up your project](https://neon.tech/docs/get-started-with-neon/setting-up-a-project/). - -## Procedure - -### Step 1 - Set up project on Neon and collect connection info - -Once you have set up your project, an `.env` file will be generated that contains the connection details for your Neon databases. Download the file (and make sure to keep it safe!). - -![neon-project-details](/content/docs/integrations/neon/neon-project-details.webp) - -Within the file, we will need the following details to establish the connection with Bytebase. - -![env-file-details](/content/docs/integrations/neon/neon-env-file-details.webp) - -### Step 2 - Add Neon database instance to Bytebase - -From your Bytebase **Create Instance** page, choose **Add Instance**, fill in the details to create the connection as follows: - -1. **Database:** `PostgreSQL`. -2. **Instance Name:** any name of your choosing, e.g. `neon-bb`. -3. **Environment:** `Prod` or `Test` (select the environment you want to add the instance to). -4. **Host or Socket:** copy the **PGHOST** from the `.env` file. -5. **Port:** 5432 (Neon uses the default PostgreSQL port of 5432 to connect). -6. **Username:** copy the **PGUSER** from the `.env` file. -7. **Password:** copy the **PGPASSWORD** from the `.env` file. -8. 
**Database:** copy the **PGDATABASE** from the `.env` file. - -![neon-bytebase-create-instance](/content/docs/integrations/neon/neon-bytebase-create-instance.webp) - -See [Add an Instance](/get-started/step-by-step/add-an-instance) for more details. - -### Step 3 - Check if the database instance is properly imported - -All databases should be synced properly. Expect some delay if the database instance is large. - -![neon-bytebase-instance](/content/docs/integrations/neon/neon-bytebase-instance.webp) - -So should the tables under the databases. - -![neon-bytebase-database](/content/docs/integrations/neon/neon-bytebase-database.webp) diff --git a/docs/integrations/prisma.mdx b/docs/integrations/prisma.mdx deleted file mode 100644 index 6241e3422..000000000 --- a/docs/integrations/prisma.mdx +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: Prisma ---- - -[Prisma Postgres](https://www.prisma.io/postgres) is a fully managed serverless PostgreSQL that focuses on performance, with built-in connection pooling. - -You can create a Prisma Postgres instance and use Bytebase to manage the database development lifecycle for those databases. - -While Prisma Postgres already has developer-oriented features like branching, Bytebase adds extra value to offer a systematic database development and change workflow. This is especially useful for cross-functional teams requiring collaboration (e.g. having a dedicated DBA or platform engineering team apart from the application development teams). - -## Prerequisites - -- [Sign up](https://console.prisma.io/) for Prisma Postgres; you can sign in with a GitHub or Google account, or with an email and password. -- After signing in, you are directed to the Prisma Console where you can [set up your project](https://www.prisma.io/docs/postgres/). 
- -## Procedure - -### Step 1 - Set up project on Prisma Postgres and collect connection info - -Once you have set up your project, click the "Connect" button in the Connect Database card and copy the connection string that is generated for you. - - -![prisma-project-setup](/content/docs/integrations/prisma/ppg-project-connection.webp) - -The connection string contains the following information: - -``` -postgresql://[username]:[password]@[host][:port]/[dbname][?param1=value1&param2=value2] -``` - -### Step 2 - Add Prisma Postgres instance to Bytebase - -From your Bytebase **Create Instance** page, choose **Add Instance**, fill in the details to create the connection as follows: - -1. **Database:** `PostgreSQL`. -2. **Instance Name:** any name of your choosing, e.g. `ppg-bb`. -3. **Environment:** `Prod` or `Test` (select the environment you want to add the instance to). -4. **Host or Socket:** the **host** section of the connection string: `db.prisma.io`. -5. **Port:** 5432 (Prisma Postgres uses the default PostgreSQL port of 5432 to connect). -6. **Username:** copy the **username** section of the connection string. -7. **Password:** copy the **password** section of the connection string. -8. **Database:** copy the **dbname** section of the connection string: `postgres`. - -![ppg-bytebase-create-instance](/content/docs/integrations/prisma/ppg-create-instance.webp) - -See [Add an Instance](/get-started/step-by-step/add-an-instance) for more details. - -### Step 3 - Check if the database instance is properly imported - -All databases should be synced properly. Expect some delay if the database instance is large. - -So should the tables under the databases. 
- -![ppg-bytebase-database](/content/docs/integrations/prisma/ppg-bytebase-database.webp) diff --git a/docs/integrations/render.mdx b/docs/integrations/render.mdx deleted file mode 100644 index 9c3f84962..000000000 --- a/docs/integrations/render.mdx +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Render ---- - -[Render](https://render.com/) is a hosting service that allows you to deploy almost anything to the cloud, including static sites, web apps, Dockerfiles, and APIs. That includes [PostgreSQL databases](https://render.com/docs/databases). - -You can create PostgreSQL databases on Render and use Bytebase to manage the database development lifecycle for those databases. - -## Prerequisites - -- [Sign up](https://dashboard.render.com/) for a Render account. -- After signing up, [create](https://dashboard.render.com/new/database) a PostgreSQL database. Feel free to choose the [Free Plan](https://render.com/docs/free), but note that free databases will expire after 90 days and Render will delete them if not upgraded. - -## Procedure - -### Step 1 - Add Render database instance to Bytebase - -Visit your Render dashboard and click on the database you created. We will need the connection details from this page. - -![render-database-connections](/content/docs/integrations/render/render-database-connections.webp) - -From your Bytebase **Create Instance** page, choose **Add Instance**, fill in the details to create the connection as follows: - -1. **Database:** `PostgreSQL`. -2. **Instance Name:** any name of your choosing, e.g. `render-db`. -3. **Environment:** `Prod` or `Test` (select the environment you want to add the instance to). -4. **Host or Socket:** from your Render database page, copy the **External Database URL** to your text editor. The URL will look like `postgres://[username]:[password]@[host][:port]/[dbname]`. Copy the **host** section to the Host or Socket field. -5. **Port:** 5432 (Render uses the default PostgreSQL port of 5432 to connect). -6. 
**Username:** copy the **Username** from your Render database page. -7. **Password:** copy the **Password** from your Render database page. -8. **Database:** copy the **Database** name from your Render database page. - -![render-bytebase-create-instance](/content/docs/integrations/render/render-bytebase-create-instance.webp) - -See [Add an Instance](/get-started/step-by-step/add-an-instance) for more details. - -### Step 2 - Check if the database instance is properly imported - -All databases should be synced properly. Expect some delay if the database instance is large. - -![render-bytebase-instance](/content/docs/integrations/render/render-bytebase-instance.webp) - -So should the tables under the databases. - -![render-bytebase-database](/content/docs/integrations/render/render-bytebase-database.webp) diff --git a/docs/integrations/slack.mdx b/docs/integrations/slack.mdx deleted file mode 100644 index 5e1eb426e..000000000 --- a/docs/integrations/slack.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Slack ---- - -## Built-in webhook - -- [IM Integration](/change-database/webhook/#slack) - -## Custom integration via API - - diff --git a/docs/integrations/supabase.mdx b/docs/integrations/supabase.mdx deleted file mode 100644 index 1f7eb42b2..000000000 --- a/docs/integrations/supabase.mdx +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: Supabase ---- - -[Supabase](https://supabase.com/) is an open source Firebase alternative providing all the backend features you need to build a product. You can use it completely, or just the features you need. - -As Supabase provides full [PostgreSQL database instances](https://supabase.com/docs/guides/database), teams can use Bytebase to manage the database development lifecycle for the Supabase databases. - -While Supabase already has an easy-to-use GUI to conduct database operations, Bytebase adds the extra value to offer a systematic database development and change workflow. 
This is especially useful for cross-functional teams requiring collaboration (e.g. having a dedicated DBA or platform engineering team apart from the application development teams). - -## Prerequisites - -- You need a [Supabase](https://supabase.com/) account (free signup). -- After signing up, create a Supabase project. You can start with the Free plan, which already includes a full PostgreSQL instance. - -## Procedure - -### Step 1 - Visit Supabase project's database setting - -![supabase-database-setting](/content/docs/integrations/supabase/supabase-database-setting.webp) - -Note down the `Host` and `Port` info. For `User` and `Password`, we recommend creating a dedicated user for Bytebase instead of using the default `postgres` user. - -### Step 2 - Create a database user for Bytebase - -Visit the Supabase SQL Editor, create a database user, and grant it the SUPERUSER role. The example below creates a user named "bytebase". - -![supabase-sql-editor](/content/docs/integrations/supabase/supabase-sql-editor.webp) - -### Step 3 - Add the Supabase database instance to Bytebase - -Choose `PostgreSQL`, and copy the `Host`, `Port`, `User` and `Password` from the last two steps to the form and click "Create". See [Add an Instance](/get-started/step-by-step/add-an-instance) for more details. - -![supabase-bytebase-create-instance](/content/docs/integrations/supabase/supabase-bytebase-create-instance.webp) - -### Step 4 - Check if the database instance is properly imported - -All databases should be synced properly. Expect some delay if the database instance is large. - -![supabase-bytebase-instance](/content/docs/integrations/supabase/supabase-bytebase-instance.webp) - -So should the tables under the databases. - -![supabase-bytebase-database](/content/docs/integrations/supabase/supabase-bytebase-database.webp)