Batch fetching into @ElementCollection

Question
I have the following entity with its persistent collections:

@Entity
@Table(name = "A")
public class A implements Identifiable<Long> {

    @Id
    private Long id;

    @ElementCollection
    @CollectionTable(name = "B", joinColumns = { @JoinColumn(name = "B_ID") })
    private Collection<B> bList;

    @ElementCollection
    @CollectionTable(name = "C", joinColumns = { @JoinColumn(name = "C_ID") })
    private Collection<C> cList;
}
After loading 10k A entities, I want to load their collections as well:

// loading A entities
final List<A> aList = getA();

// looping over the 10k results
for (final A a : aList) {
    final Collection<B> bList = a.getB();
    final Collection<C> cList = a.getC();
}

This generates a lot of select statements (~10k). Very poor performance here!

Any idea how to work with batch selects here?
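For reference, Hibernate's built-in answer to exactly this kind of "batch select" is the @BatchSize annotation (org.hibernate.annotations.BatchSize), which loads the collections of up to size owners with a single IN query instead of one query per owner. A minimal sketch on the mapping above (the size value of 100 is an arbitrary choice, not something from the question):

```java
import java.util.Collection;
import javax.persistence.*;
import org.hibernate.annotations.BatchSize;

@Entity
@Table(name = "A")
public class A implements Identifiable<Long> {

    @Id
    private Long id;

    // with @BatchSize(size = 100), loading the collections of 10k A entities
    // takes roughly 10000 / 100 = 100 selects instead of 10000
    @ElementCollection
    @CollectionTable(name = "B", joinColumns = { @JoinColumn(name = "B_ID") })
    @BatchSize(size = 100)
    private Collection<B> bList;
}
```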
Answer 1 (score: 0)
I have solved this!

IDEA

When using @ElementCollection, Hibernate takes care of the SQL statements and of mapping the results to the collection values. That is convenient, but there is a trade-off: the more parent results we have, the worse the performance gets. With 10k parent records, Hibernate issues 10k selects to fetch their child collections.

Instead of loading the children for every single parent, create a native query that loads everything at once. We get results like this:
PARENT_ID CHILD_ID
1 1
1 2
1 3
2 1
2 2
3 3
Then implement a Hibernate result transformer to convert these raw database rows into DTOs.

Code example

Create the DTOs:
public class ADto {

    private long id;
    private Collection<BDto> bList = new HashSet<>();

    // constructor

    public void addChildren(BDto b) {
        bList.add(b);
    }

    // equals and hashCode
}

public class BDto {

    private long id;

    // constructor
    // equals and hashCode
}
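The elided equals and hashCode stubs matter: transformList below deduplicates through a LinkedHashSet, which only works if ADto defines equality. A minimal self-contained sketch, assuming id-based equality (the original code elides this, and children are simplified to plain long ids here):

```java
import java.util.*;

public class ADto {

    private final long id;
    private final Collection<Long> bList = new HashSet<>();

    public ADto(long id) { this.id = id; }

    public void addChildren(long b) { bList.add(b); }

    // id-based equality, so duplicate rows for the same parent collapse
    @Override
    public boolean equals(Object o) {
        return o instanceof ADto && ((ADto) o).id == id;
    }

    @Override
    public int hashCode() { return Long.hashCode(id); }

    public static void main(String[] args) {
        ADto first = new ADto(1);
        ADto second = new ADto(1); // same id, different instance
        List<ADto> deduped = new ArrayList<>(new LinkedHashSet<>(List.of(first, second)));
        System.out.println(deduped.size()); // prints 1
    }
}
```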
And the transformer:

public class CustomTransformer extends AliasedTupleSubsetResultTransformer {

    private final Map<Long, ADto> result = new HashMap<>();
    private final Map<String, Integer> aliasIndexes = new HashMap<>();

    @Override
    public boolean isTransformedValueATupleElement(final String[] aliases, final int tupleLength) {
        return false;
    }

    @Override
    public List transformList(final List list) {
        // drop the duplicates produced by the join, keeping insertion order
        return new ArrayList(new LinkedHashSet(list));
    }

    @Override
    public Object transformTuple(final Object[] tuple, final String[] aliases) {
        init(aliases);
        final A aEntity = (A) get(tuple, "parent"); // same as alias in DAO layer
        final B bEntity = (B) get(tuple, "child");  // same as alias in DAO layer
        final Long id = aEntity.getId();
        final ADto aDto;
        if (result.containsKey(id)) {
            aDto = result.get(id);
        } else {
            aDto = new ADto(...);
        }
        aDto.addChildren(new BDto(...)); // create a BDto instance from bEntity
        result.put(id, aDto);
        return aDto;
    }

    private Object get(final Object[] tuple, final String alias) {
        return tuple[aliasIndexes.get(alias)];
    }

    private void init(final String[] aliases) {
        if (aliasIndexes.isEmpty()) {
            for (int i = 0; i < aliases.length; i++) {
                aliasIndexes.put(aliases[i], i);
            }
        }
    }
}
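The core of transformTuple, accumulating each child row under its parent while reusing the same parent DTO across rows, can be sketched with plain collections (long ids stand in for the entities; this is an illustration of the grouping logic, not the Hibernate API):

```java
import java.util.*;

public class GroupRowsSketch {

    // group (parentId, childId) rows the way transformTuple does:
    // one entry per parent, children accumulated across rows
    static Map<Long, Set<Long>> group(long[][] rows) {
        final Map<Long, Set<Long>> result = new LinkedHashMap<>();
        for (final long[] row : rows) {
            result.computeIfAbsent(row[0], k -> new LinkedHashSet<>()).add(row[1]);
        }
        return result;
    }

    public static void main(String[] args) {
        // the sample result set from above
        final long[][] rows = { {1, 1}, {1, 2}, {1, 3}, {2, 1}, {2, 2}, {3, 3} };
        System.out.println(group(rows)); // prints {1=[1, 2, 3], 2=[1, 2], 3=[3]}
    }
}
```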
DAO layer

final String queryString = "SELECT {parent.*}, {child.*} FROM A parent LEFT JOIN B child ON parent.id = child.parent_id";
final NativeQuery query = getCurrentSession().createNativeQuery(queryString)
        .addEntity("parent", A.class)
        .addEntity("child", B.class);
query.setResultTransformer(new CustomTransformer());
return safeList(query);