pgloader – How to import a longblob as oid?


In a nutshell

How do you migrate a longblob from MySQL to Postgres using pgloader such that Hibernate is happy when the column is annotated @Lob and @Basic(fetch = FetchType.LAZY)?

Full story

So I'm migrating (or trying to, at least) a MySQL DB to Postgres, and I'm now trying to move this table correctly:

(screenshot of the MySQL table omitted)

My current pgloader script is fairly simple:

    LOAD DATABASE
    FROM mysql://foo:bar@localhost:3306/foobar
    INTO postgresql://foo:bar@localhost:5432/foobar
    CAST
        type int to integer drop typemod,
        type bigint with extra auto_increment to bigserial drop typemod,
        type bigint to bigint drop typemod
    ALTER TABLE NAMES MATCHING 'User' RENAME TO 'users'
    ALTER TABLE NAMES MATCHING ~/./ SET SCHEMA 'public'
    ;

This is sufficient to load the data and have the foreign keys working.

The Postgres table looks like this:

(screenshot of the Postgres table omitted)

File, however, is a Java entity, and its content field is annotated @Lob:

    @Entity
    @Inheritance(strategy = InheritanceType.JOINED)
    public class File extends BaseEntity {
        @NotNull
        private String name;

        @Column
        @Size(max = 4096)
        private String description;

        @NotNull
        private String mimeType;

        @Lob
        @Basic(fetch = FetchType.LAZY)
        private transient byte[] content;
        ...
    }

which is why the application fails to connect to the migrated database with error:

    Schema-validation: wrong column type encountered in column [content] in table [File];
    found [bytea (Types#BINARY)], but expecting [oid (Types#BLOB)]
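For reference, the mismatch can be confirmed directly in Postgres by querying the catalog; a minimal sketch, assuming pgloader loaded the table as public.file:

```sql
-- Inspect the type of the migrated column
-- (Hibernate expects oid; pgloader's default cast for longblob produces bytea)
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name   = 'file'
  AND column_name  = 'content';
```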

How do I get this migration to work?

I did try setting

    spring.jpa.properties.hibernate.jdbc.use_streams_for_binary=false

as suggested in https://stackoverflow.com/questions/3677380/proper-hibernate-annotation-for-byte, but that had no effect.

Answer 1

Score: 2


Hm ... I guess I can just create blobs after the fact, as suggested by https://stackoverflow.com/questions/55006781/migrate-postgresql-text-bytea-column-to-large-object

Meaning the migration script will get an extension:

    LOAD DATABASE
    FROM mysql://foo:bar@localhost:3306/foobar
    INTO postgresql://foo:bar@localhost:5432/foobar
    CAST
        type int to integer drop typemod,
        type bigint with extra auto_increment to bigserial drop typemod,
        type bigint to bigint drop typemod
    ALTER TABLE NAMES MATCHING 'User' RENAME TO 'users'
    ALTER TABLE NAMES MATCHING ~/./ SET SCHEMA 'public'
    AFTER LOAD DO
    $$
    ALTER TABLE file RENAME COLUMN content TO content_bytes;
    $$,
    $$
    ALTER TABLE file ADD COLUMN content OID;
    $$,
    $$
    UPDATE file SET
        content = lo_from_bytea(0, content_bytes::bytea),
        content_bytes = NULL
    ;
    $$,
    $$
    ALTER TABLE file DROP COLUMN content_bytes
    $$
    ;
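To sanity-check the conversion afterwards, one can confirm the new column type and read a few bytes back out of each large object; a minimal sketch, assuming the table ended up as public.file with an id column:

```sql
-- The column should now report type 'oid'
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name   = 'file'
  AND column_name  = 'content';

-- Read the first 16 bytes of each stored large object back as bytea
SELECT id, lo_get(content, 0, 16) AS first_bytes
FROM file
LIMIT 5;
```

One caveat with this approach: large objects are not removed automatically when the referencing row is deleted, so orphans can accumulate; the contrib tools vacuumlo or the lo_manage trigger are the usual way to clean them up.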

huangapple
  • Posted on 2020-08-11 21:46:26
  • Please keep this link when reposting: https://go.coder-hub.com/63359543.html