Error with a replicated database on Azure Synapse Serverless

Question

I'm trying to add a login/user to a replicated database on Azure Synapse Serverless.

I created a new login and user:

CREATE LOGIN <Name> WITH PASSWORD = 'Password';
CREATE USER <Name> FOR LOGIN <Name>;

Now I can access the db and see all tables. But when I try to query a table, I get:

> Cannot find the CREDENTIAL 'https://.......', because it does not exist or you do not have permission.

Then I created a scoped credential:

CREATE DATABASE SCOPED CREDENTIAL [SasToken]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sp=XXXX';

Now I'm stuck with an error when I try:

GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[SasToken] TO [Name];

The message says:

> Operation GRANT is not allowed for a replicated database.

How can I solve this? I only need a solution that works with Serverless. Thanks in advance.

Answer 1

Score: 2

As Bhavani mentioned in their answer, Lake databases are replicated from and managed by Spark. I got around this by instead creating a new database in Serverless SQL outside of Spark.

To do this, you'll need to have Spark write your tables out to Azure Data Lake Storage Gen2; I've chosen to use Delta tables. You can then create a database scoped credential for the Managed Service Identity, an external data source that points at the root folder containing your Delta table folders, a role (if you like) to grant credential access to, and views to encapsulate the OPENROWSET queries on your Delta tables.
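
For the Spark side of this, a minimal Spark SQL sketch of writing a Delta table under the database root folder could look like the following (run on the Synapse Spark pool, not serverless SQL; the container, storage account, paths, and table names are all hypothetical placeholders):

-- Spark SQL on the Synapse Spark pool; every name and path below is a placeholder.
CREATE TABLE my_delta_table
USING DELTA
LOCATION 'abfss://mycontainer@mystorageaccount.dfs.core.windows.net/path_to_database_files_root/my_delta_table'
AS SELECT * FROM source_table;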

This can all be done in a SQL script on the built-in serverless pool, or by connecting with Azure Data Studio or a similar client.

On the master database:

IF NOT EXISTS(SELECT * FROM sys.databases WHERE name = '[your database name]')
  CREATE DATABASE [your database name]

On your new database:

IF NOT EXISTS(SELECT * FROM sys.database_scoped_credentials WHERE name = 'ManagedIdentityCredential')
  BEGIN
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '[master encryption password]';

    CREATE DATABASE SCOPED CREDENTIAL ManagedIdentityCredential
       WITH IDENTITY = 'MANAGED SERVICE IDENTITY';

    CREATE EXTERNAL DATA SOURCE [your_database_name]
       WITH (
         LOCATION = 'https://[storage_account_name].dfs.core.windows.net/path_to_database_files_root/',
         CREDENTIAL = ManagedIdentityCredential
       );
  END

Create views for your tables:

CREATE OR ALTER VIEW [view_name] AS
  SELECT
    {columns}
  FROM
    OPENROWSET(BULK '/path_to_delta_table', DATA_SOURCE = '[your_database_name]', FORMAT = 'DELTA') AS Result
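
Once created, the view can be queried like a regular table, with the Delta files read through the data source's credential. A minimal check (the view name is a placeholder):

SELECT TOP 10 * FROM dbo.[view_name];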

A role for granting access:

IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE type = 'R' AND name = '[role_name]')
    CREATE ROLE [role_name]
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[ManagedIdentityCredential] TO [role_name]
GRANT SELECT ON SCHEMA::[dbo] TO [role_name]

Add your database user to that role:

ALTER ROLE [role_name] ADD MEMBER [user_name]
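
To sanity-check the grants, one option is to impersonate the user and run a query. EXECUTE AS is a generic T-SQL pattern rather than something specific to this setup, so confirm it is supported on your serverless pool before relying on it (names are placeholders):

-- Impersonate the new user, test read access, then switch back.
-- Assumes EXECUTE AS is available in your environment; verify before use.
EXECUTE AS USER = '[user_name]';
SELECT TOP 10 * FROM dbo.[view_name];
REVERT;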

I hope this helps! In my particular use case I've automated this as a Python script executed by a pipeline, and used naming conventions to automatically generate views for the folders within the database folder root.

Answer 2

Score: 0

I tried to replicate your issue in my environment:

GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[SasToken] TO [Name];

I got the same error that you got:

> Operation GRANT is not allowed for a replicated database.

As per this, the above error is returned if you are trying to modify a Lake database or create external tables, external data sources, database scoped credentials, or other objects in your Lake database. These objects can be created only in SQL databases.

Lake databases are replicated from the Apache Spark pool and managed by Apache Spark. Therefore, you cannot create objects in them with T-SQL the way you can in SQL databases, so these kinds of operations are not possible on a Lake database.
