Optimizing a LOB INSERT over a database link

Question

I am trying to insert the rows of a table that is accessible via db_link and contains a LOB column. However, the performance is pretty bad. I have tried using cursors and bulk collect, but they don't seem to work with LOBs in remote databases. Is there any other option to optimize this?

This is my query. It works without the last two filter conditions, since operations on the LOB column are not allowed in the insert query, but ideally we would like to include them in the same statement.

-- Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
-- PL/SQL Release 12.1.0.2.0 - Production
-- "CORE	12.1.0.2.0	Production"
-- TNS for Linux: Version 12.1.0.2.0 - Production
-- NLSRTL Version 12.1.0.2.0 - Production


INSERT INTO raw_data (
    "URL",
    tipo,
    fecha_evento,
    fecha_registro,
    usuario,
    cliente,
    contrato,
    referer,
    correlation_id,
    session_id,
    tracking_cookie,
    "JSON",
    "APPLICATION"
)
    SELECT
        r.*
    FROM
        schema.remote_table@dblink r
    WHERE
        fecha_evento BETWEEN trunc(SYSDATE - INTERVAL '1' MONTH, 'MONTH') AND trunc(SYSDATE, 'MONTH') - INTERVAL '1' DAY
        AND url IN (
            'a',
            'b',
            'c',
            'd'
        )
        AND NOT JSON_EXISTS ( r."JSON", '$.response.pasofin.codoferta' )
        AND "URL" NOT IN (
            'e',
            'f'
        );

Answer 1

Score: 1

LOB operations over dblinks have long been an issue, though they've improved in more recent versions. Internally, when you copy a LOB over via INSERT ... SELECT or CTAS, Oracle will, for each row, run a loop of internal dbms_lob calls (or something underneath them) to incrementally fetch bytes into a buffer and write them to the target LOB segment. Think of it as a nested loop: for each row, do an inner loop as many times as it takes to assemble one LOB (and do it again for each additional LOB in the row). Clearly this is a lot more work than a straight pull of a simple rowset.

The buffer each LOB piece is copied into has historically been quite small, resulting in many round trips, so network latency adds up and pulling LOBs over can become quite slow. One wishes it were possible to control the buffer size so fewer round trips were necessary, but unless there's an underscore parameter that does this, I don't think we have any control over it. I have noticed that LOB movement has improved in recent versions, but I don't recall whether it had yet by 12.1.

There are several workarounds to speed this up.

  1. Pull all rows with a LOB length of < 4000 bytes, casting them to varchar2(4000) so they are transferred over the link as varchar2. All uses of dbms_lob while doing this must use the @dblink remote versions, not the local ones. Due to characterset mismatch issues you might need to make this varchar2(2000) or even varchar2(1000). This is very fast. Then pull the longer ones over as LOBs, which is hopefully far fewer rows. If most of your LOBs are less than 4K, this can really speed things up; if most of your LOBs are greater than 4K, however, you get no benefit. (See the first sketch after this list.)

  2. You can create a procedure on the remote database that takes a column name and a table name and reads the LOBs from all rows locally in a determined row order, converting them into a collection of varchar2(32767) records (this can then be converted to a BLOB and a collection of RAW, and lz_compressed if desired). Return the collection via an OUT parameter, along with another collection of byte offsets showing where each row's LOB starts. The calling database receives the output collections and reverses the process locally, reconstructing the original LOBs and writing them to the target. This is rather complicated but works (I've done it successfully), and it is a lot faster than native LOB movement over a dblink. Because of its complexity and liability to bugs, however, it isn't an approach I would highly recommend. But it is an option. (A condensed sketch follows the list.)

  3. Use multiple processes (e.g. via dbms_scheduler) to break the table extract into roughly equal portions and tackle the problem with brute force. Reassemble the per-thread work tables (or better, partitions of one table) at the end to collect your final result. Ten threads will move those LOBs almost 10x faster than a single session can (you do have the overhead of writing the data twice, but that's small compared to the benefit of parallelizing the network pull). This can be done either by itself or in combination with the other techniques mentioned above. (Also sketched after the list.)
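Here is a minimal sketch of the two-pass split from item 1. It reuses the table and link names from the question but abbreviates the column list and omits the question's date/url filters; the dbms_lob.substr@dblink / dbms_lob.getlength@dblink calls are the remote versions the item refers to (local dbms_lob cannot operate on a remote LOB locator). Adjust 4000 downward if the charactersets differ.

-- Pass 1: rows whose "JSON" CLOB fits in 4000 bytes travel as VARCHAR2.
INSERT INTO raw_data ("URL", fecha_evento, "JSON" /* ...remaining columns... */)
SELECT r."URL",
       r.fecha_evento,
       TO_CLOB(dbms_lob.substr@dblink(r."JSON", 4000, 1))
FROM   schema.remote_table@dblink r
WHERE  dbms_lob.getlength@dblink(r."JSON") <= 4000;

-- Pass 2: only the (hopefully few) longer LOBs are moved natively.
INSERT INTO raw_data ("URL", fecha_evento, "JSON" /* ...remaining columns... */)
SELECT r."URL", r.fecha_evento, r."JSON"
FROM   schema.remote_table@dblink r
WHERE  dbms_lob.getlength@dblink(r."JSON") > 4000;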
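Item 2 in condensed form. Everything named here is illustrative, since the answer only describes the shape of the approach: a hypothetical lob_ship package (whose spec must exist identically on both databases so the index-by types match across the link), a fixed table/column instead of the dynamic table-name/column-name version, and chunk-index rather than byte offsets. Chunking at 8000 characters keeps each piece under the 32767-byte VARCHAR2 limit even with multibyte charactersets.

-- On the remote database (duplicate the spec on the calling side too):
CREATE OR REPLACE PACKAGE lob_ship AS
    TYPE t_chunks IS TABLE OF VARCHAR2(32767) INDEX BY PLS_INTEGER;
    TYPE t_starts IS TABLE OF PLS_INTEGER     INDEX BY PLS_INTEGER;
    PROCEDURE read_lobs(p_chunks OUT t_chunks, p_starts OUT t_starts);
END lob_ship;
/
CREATE OR REPLACE PACKAGE BODY lob_ship AS
    PROCEDURE read_lobs(p_chunks OUT t_chunks, p_starts OUT t_starts) IS
        c_chunk CONSTANT PLS_INTEGER := 8000;  -- chars; safe for multibyte
        i   PLS_INTEGER := 0;
        n   PLS_INTEGER := 0;
        pos PLS_INTEGER;
        len PLS_INTEGER;
    BEGIN
        -- Determined row order, so the caller rebuilds rows in the same sequence.
        FOR rec IN (SELECT "JSON" j FROM remote_table ORDER BY correlation_id) LOOP
            n := n + 1;
            p_starts(n) := i + 1;              -- first chunk of this row's LOB
            len := NVL(dbms_lob.getlength(rec.j), 0);
            pos := 1;
            WHILE pos <= len LOOP
                i := i + 1;
                p_chunks(i) := dbms_lob.substr(rec.j, c_chunk, pos);
                pos := pos + c_chunk;
            END LOOP;
        END LOOP;
    END read_lobs;
END lob_ship;
/

-- On the calling database: fetch the collections and rebuild each LOB.
DECLARE
    l_chunks lob_ship.t_chunks;
    l_starts lob_ship.t_starts;
    l_clob   CLOB;
    l_last   PLS_INTEGER;
BEGIN
    lob_ship.read_lobs@dblink(l_chunks, l_starts);
    FOR r IN 1 .. l_starts.COUNT LOOP
        l_last := CASE WHEN r < l_starts.COUNT
                       THEN l_starts(r + 1) - 1
                       ELSE l_chunks.COUNT END;
        dbms_lob.createtemporary(l_clob, TRUE);
        FOR c IN l_starts(r) .. l_last LOOP
            dbms_lob.writeappend(l_clob, LENGTH(l_chunks(c)), l_chunks(c));
        END LOOP;
        -- ...insert/update the target row with l_clob here...
        dbms_lob.freetemporary(l_clob);
    END LOOP;
END;
/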
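Finally, a sketch of the brute-force split from item 3, again with illustrative names throughout: copy_chunk is a hypothetical worker that copies one hash bucket of the remote table into its own work table, and the ten one-off scheduler jobs run concurrently and auto-drop when finished.

-- Hypothetical worker: slot p_slot of p_threads copies its hash bucket.
CREATE OR REPLACE PROCEDURE copy_chunk(p_slot PLS_INTEGER, p_threads PLS_INTEGER) AS
BEGIN
    EXECUTE IMMEDIATE
        'INSERT INTO raw_data_work_' || p_slot || '
         SELECT r.*
         FROM   schema.remote_table@dblink r
         WHERE  MOD(ora_hash(r.correlation_id), :n) = :slot'
        USING p_threads, p_slot;
    COMMIT;
END;
/

-- Launch ten parallel pulls, then reassemble the work tables afterwards.
BEGIN
    FOR slot IN 0 .. 9 LOOP
        dbms_scheduler.create_job(
            job_name   => 'COPY_RAW_DATA_' || slot,
            job_type   => 'PLSQL_BLOCK',
            job_action => 'BEGIN copy_chunk(' || slot || ', 10); END;',
            enabled    => TRUE);
    END LOOP;
END;
/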

Posted by huangapple on 2023-08-09 16:14:01.
Source: https://go.coder-hub.com/76865803-2.html