How to merge csv files based on unique columns in java
Question
I have three different, very large files, and I need to merge them into a single CSV file. The three files have different numbers of columns, but some columns are shared between them.
Example
file 1:
StudentId StudentName ClassId
1 Ajay 6
2 Vinay 8
3 Geeta 6
4 Sameer 7
file 2:
ClassId ClassColor
6 Blue
7 Grey
8 White
file 3:
HouseId HouseName StudentId
1 Knights 4
2 Spartans 1
3 Samurai 2
4 Trojans 3
As you can see, file 1 and file 2 have ClassId in common, and file 1 and file 3 have StudentId in common.
Based on this, I want to produce the following result by joining the files on the common keys.
StudentId StudentName ClassId ClassColor HouseId HouseName
1 Ajay 6 Blue 2 Spartans
2 Vinay 8 White 3 Samurai
3 Geeta 6 Blue 4 Trojans
4 Sameer 7 Grey 1 Knights
This scenario is completely new to me; I did some research but didn't find a solution for it in Java.
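The join described above can be sketched with plain in-memory maps (a minimal illustration using only the sample rows; file I/O is omitted, and `JoinSketch` is just a hypothetical name for this sketch):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class JoinSketch {

    /** Produces the merged rows for the sample data above (header row excluded). */
    static List<String> joinedRows() {
        // file 2: ClassId -> ClassColor
        Map<String, String> classColor = new LinkedHashMap<>();
        classColor.put("6", "Blue");
        classColor.put("7", "Grey");
        classColor.put("8", "White");

        // file 3, re-keyed by the shared StudentId column: StudentId -> "HouseId,HouseName"
        Map<String, String> houseByStudent = new LinkedHashMap<>();
        houseByStudent.put("4", "1,Knights");
        houseByStudent.put("1", "2,Spartans");
        houseByStudent.put("2", "3,Samurai");
        houseByStudent.put("3", "4,Trojans");

        // file 1 drives the join: each student row picks up the two lookups
        String[][] students = {
            {"1", "Ajay", "6"}, {"2", "Vinay", "8"},
            {"3", "Geeta", "6"}, {"4", "Sameer", "7"}
        };
        List<String> rows = new ArrayList<>();
        for (String[] s : students) {
            rows.add(String.join(",", s[0], s[1], s[2],
                classColor.get(s[2]), houseByStudent.get(s[0])));
        }
        return rows;
    }

    public static void main(String[] args) {
        System.out.println("StudentId,StudentName,ClassId,ClassColor,HouseId,HouseName");
        joinedRows().forEach(System.out::println);
    }
}
```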
Answer 1
Score: 0
At this stage I am able to join two files with the implementation below:
Class to hold csv file data:
import java.util.LinkedHashMap;
import java.util.Map;
public class CsvVo {
private Map<String, String> keyVal;
public CsvVo(String id) {
// the id argument (e.g. the source file name) is currently unused
keyVal = new LinkedHashMap<>(); // use HashMap if insertion order does not matter
}
public Map<String, String> getKeyVal() {
return keyVal;
}
public void setKeyVal(Map<String, String> keyVal) {
this.keyVal = keyVal;
}
public void put(String key, String val) {
keyVal.put(key, val);
}
public String get(String key) {
return keyVal.get(key);
}
}
CSV parser to read files:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
public class CsvParser {
public static List<CsvVo> getRecodrsFromACsv(File file, List<String> keys) throws IOException {
List<CsvVo> records = new ArrayList<>();
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
boolean isHeader = true;
String line;
while ((line = br.readLine()) != null) {
if (isHeader) { // first line is the header
isHeader = false;
continue;
}
CsvVo record = new CsvVo(file.getName());
String[] lineSplit = line.split(",", -1); // limit -1 keeps trailing empty fields
for (int i = 0; i < lineSplit.length; i++) {
record.put(keys.get(i), lineSplit[i]);
}
records.add(record);
}
}
return records;
}
public static List<String> getHeadersFromACsv(File file) throws IOException {
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
String line = br.readLine(); // only the first line holds the headers
if (line == null) {
return new ArrayList<>(); // empty file: no headers
}
return new ArrayList<>(Arrays.asList(line.split(",")));
}
}
public static void writeToCsv(final File file, final Set<String> headers, final List<CsvVo> records)
throws IOException {
String[] headersArr = headers.toArray(new String[0]);
try (FileWriter csvWriter = new FileWriter(file)) {
// write the header row
csvWriter.append(String.join(",", headersArr)).append("\n");
// write one record per line, in header order
for (CsvVo record : records) {
String sep = "";
for (String header : headersArr) {
String val = record.get(header);
csvWriter.append(sep).append(val == null ? "" : val); // missing column -> empty cell
sep = ",";
}
csvWriter.append("\n");
}
}
}
}
Main Class to test the functionality:
import java.io.File;
import java.io.IOException;
import java.util.*;
public class CsvApplication {
public static void main(String[] args) throws IOException {
File csv1 = new File("/Users/guru/Desktop/Standard.csv");
File csv2 = new File("/Users/guru/Desktop/Match.csv");
List<String> csv1Headers = CsvParser.getHeadersFromACsv(csv1);
List<String> csv2Headers = CsvParser.getHeadersFromACsv(csv2);
List<String> allCsvHeaders = new ArrayList<>();
allCsvHeaders.addAll(csv1Headers);
allCsvHeaders.addAll(csv2Headers);
// LinkedHashSet keeps the column order stable in the output file
Set<String> uniqueHeaders = new LinkedHashSet<>(allCsvHeaders);
List<CsvVo> csv1Records = CsvParser.getRecodrsFromACsv(csv1, csv1Headers);
List<CsvVo> csv2Records = CsvParser.getRecodrsFromACsv(csv2, csv2Headers);
List<CsvVo> allCsvRecords = new ArrayList<>();
String key = "StudentID"; // must match the header name in the files exactly (case-sensitive)
getUniqueRecordsForKey(key, csv1Records, csv2Records, allCsvRecords);
CsvParser.writeToCsv(new File("/Users/guru/Desktop/Output.csv"), uniqueHeaders, allCsvRecords);
}
private static void getUniqueRecordsForKey(String key, List<CsvVo> csv1Records, List<CsvVo> csv2Records, List<CsvVo> allCsvRecords) {
for (CsvVo record1 : csv1Records) {
for (CsvVo record2 : csv2Records) {
String keyVal = record1.getKeyVal().get(key);
if (keyVal != null && !keyVal.isEmpty() && keyVal.equals(record2.getKeyVal().get(key))) {
Map<String, String> mergedMap = new LinkedHashMap<>();
CsvVo mergedRecord = new CsvVo(record2.getKeyVal().get(key));
mergeRecords(mergedMap, record1, record2);
mergedRecord.setKeyVal(mergedMap);
csv2Records.remove(record2); // safe: we break out of the inner loop right after
allCsvRecords.add(mergedRecord);
break;
}
}
}
}
private static void mergeRecords(Map<String, String> mergedMap, CsvVo record1, CsvVo record2) {
mergedMap.putAll(record1.getKeyVal());
mergedMap.putAll(record2.getKeyVal());
}
}
Here we load the entire contents of the files into lists and then compare records on the column we want to join on; in this case I have used StudentID.
I am not sure whether this is a good approach for very big files.
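One way to make the approach above scale better is to index one side of each join by its key in a HashMap first, which replaces the O(n×m) nested loop with a single pass plus O(1) lookups. A minimal sketch of that idea, using plain maps instead of the CsvVo wrapper (`indexBy` and `hashJoin` are hypothetical helper names, not part of the code above):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HashJoin {

    /** Index records by the value of the join key; later duplicates overwrite earlier ones. */
    static Map<String, Map<String, String>> indexBy(List<Map<String, String>> records, String key) {
        Map<String, Map<String, String>> index = new HashMap<>();
        for (Map<String, String> r : records) {
            String k = r.get(key);
            if (k != null && !k.isEmpty()) {
                index.put(k, r);
            }
        }
        return index;
    }

    /** Inner join: for each left record, merge in the matching right record, if any. */
    static List<Map<String, String>> hashJoin(List<Map<String, String>> left,
                                              List<Map<String, String>> right, String key) {
        Map<String, Map<String, String>> rightIndex = indexBy(right, key);
        List<Map<String, String>> joined = new ArrayList<>();
        for (Map<String, String> l : left) {
            Map<String, String> r = rightIndex.get(l.get(key));
            if (r == null) {
                continue; // no match: drop the row (inner-join semantics)
            }
            Map<String, String> merged = new LinkedHashMap<>(l);
            merged.putAll(r);
            joined.add(merged);
        }
        return joined;
    }
}
```

Joining the three files would then be two calls: join file 1 with file 2 on ClassId, then join that result with file 3 on StudentId. Truly huge inputs that do not fit in memory would need an external-merge or database-backed approach instead.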