How to compress and decompress a file using lz4?

Question

I want to compress and decompress a file using the lz4 algorithm in Go. Is there a package available for this? I searched and found https://github.com/pierrec/lz4.

I am new to Go and cannot figure out how to use this package to compress and decompress a file.

I need to use this package to compress a file to a binary format and then decompress that binary file back to the original file.

Answer 1

Score: 5

I think the example below should point you in the right direction. It is the simplest way to compress and decompress using the github.com/pierrec/lz4 package.

//compress project main.go
package main

import "fmt"
import "github.com/pierrec/lz4"

var fileContent = `CompressBlock compresses the source buffer starting at soffet into the destination one.
This is the fast version of LZ4 compression and also the default one.
The size of the compressed data is returned. If it is 0 and no error, then the data is incompressible.
An error is returned if the destination buffer is too small.`

func main() {
    toCompress := []byte(fileContent)
    compressed := make([]byte, len(toCompress))

    //compress
    l, err := lz4.CompressBlock(toCompress, compressed, 0)
    if err != nil {
        panic(err)
    }
    fmt.Println("compressed Data:", string(compressed[:l]))

    //decompress
    decompressed := make([]byte, len(toCompress))
    l, err = lz4.UncompressBlock(compressed[:l], decompressed, 0)
    if err != nil {
        panic(err)
    }
    fmt.Println("\ndecompressed Data:", string(decompressed[:l]))
}
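
Since the question is about files, here is a minimal sketch of how the same block API could be applied to a whole file, assuming the file fits in memory. The file names (input.txt, input.txt.lz4, input_restored.txt) are made up, and the three-argument CompressBlock/UncompressBlock calls match the version of the package used above (newer releases of the package have changed this API):

//compress_file project main.go (sketch only)
package main

import (
    "os"

    "github.com/pierrec/lz4"
)

func main() {
    // read the whole file into memory; the block API works on byte slices
    data, err := os.ReadFile("input.txt")
    if err != nil {
        panic(err)
    }

    // compress into a buffer of the same size; a result of 0 with no error
    // means the data is incompressible
    compressed := make([]byte, len(data))
    n, err := lz4.CompressBlock(data, compressed, 0)
    if err != nil {
        panic(err)
    }

    // write the compressed block; note that the original length must be
    // known (or stored alongside) to size the decompression buffer later
    if err := os.WriteFile("input.txt.lz4", compressed[:n], 0644); err != nil {
        panic(err)
    }

    // decompress back into a buffer sized to the original length
    decompressed := make([]byte, len(data))
    n, err = lz4.UncompressBlock(compressed[:n], decompressed, 0)
    if err != nil {
        panic(err)
    }
    if err := os.WriteFile("input_restored.txt", decompressed[:n], 0644); err != nil {
        panic(err)
    }
}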

Answer 2

Score: 4

Using the bufio package, you can (de)compress files without slurping their entire contents into memory all at once.

In effect, this allows you to (de)compress files larger than the memory available to the system, which may or may not be relevant to your specific circumstances.

If it is, here is a working example:

package main

import (
	"bufio"
	"io"
	"os"

	"github.com/pierrec/lz4"
)

// Compress a file, then decompress it again!
func main() {
	compress("./compress-me.txt", "./compressed.txt")
	decompress("./compressed.txt", "./decompressed.txt")
}

func compress(inputFile, outputFile string) {
	// open the input file
	fin, err := os.Open(inputFile)
	if err != nil {
		panic(err)
	}
	defer func() {
		if err := fin.Close(); err != nil {
			panic(err)
		}
	}()
	// make a read buffer
	r := bufio.NewReader(fin)

	// open the output file
	fout, err := os.Create(outputFile)
	if err != nil {
		panic(err)
	}
	defer func() {
		if err := fout.Close(); err != nil {
			panic(err)
		}
	}()
	// make an lz4 write buffer
	w := lz4.NewWriter(fout)

	// make a buffer to hold the chunks that are read
	buf := make([]byte, 1024)
	for {
		// read a chunk
		n, err := r.Read(buf)
		if err != nil && err != io.EOF {
			panic(err)
		}
		if n == 0 {
			break
		}

		// write a chunk
		if _, err := w.Write(buf[:n]); err != nil {
			panic(err)
		}
	}

	if err = w.Flush(); err != nil {
		panic(err)
	}
	// Close is required so the lz4 frame is properly finalized
	if err = w.Close(); err != nil {
		panic(err)
	}
}

func decompress(inputFile, outputFile string) {
	// open the input file
	fin, err := os.Open(inputFile)
	if err != nil {
		panic(err)
	}
	defer func() {
		if err := fin.Close(); err != nil {
			panic(err)
		}
	}()

	// make an lz4 read buffer
	r := lz4.NewReader(fin)

	// open the output file
	fout, err := os.Create(outputFile)
	if err != nil {
		panic(err)
	}
	defer func() {
		if err := fout.Close(); err != nil {
			panic(err)
		}
	}()

	// make a write buffer
	w := bufio.NewWriter(fout)

	// make a buffer to hold the chunks that are read
	buf := make([]byte, 1024)
	for {
		// read a chunk
		n, err := r.Read(buf)
		if err != nil && err != io.EOF {
			panic(err)
		}
		if n == 0 {
			break
		}

		// write a chunk
		if _, err := w.Write(buf[:n]); err != nil {
			panic(err)
		}
	}

	if err = w.Flush(); err != nil {
		panic(err)
	}
}
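
As a side note, the manual chunk loop above could be replaced with io.Copy, which does the chunking internally. Below is a minimal sketch of the compress side under that assumption; compressWithCopy is a made-up name, and it reuses the same imports as the example above:

// compressWithCopy is a sketch of the same streaming compression using
// io.Copy instead of a manual read/write loop.
func compressWithCopy(inputFile, outputFile string) error {
	fin, err := os.Open(inputFile)
	if err != nil {
		return err
	}
	defer fin.Close()

	fout, err := os.Create(outputFile)
	if err != nil {
		return err
	}
	defer fout.Close()

	zw := lz4.NewWriter(fout)
	// io.Copy streams the input file through the lz4 writer in chunks
	if _, err := io.Copy(zw, fin); err != nil {
		return err
	}
	// closing the lz4 writer finalizes the frame
	return zw.Close()
}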

Hope this helps!

Answer 3

Score: 1

The result I expected comes from the code below. I got it from this file: https://github.com/pierrec/lz4/blob/master/lz4c/main.go.
The file is given as a command-line argument and is compressed/decompressed successfully.

package main
import (
	// 	"bytes"

	"flag"
	"fmt"
	"io"
	"log"
	"os"
	"path"
	"runtime"
	"strings"

	"github.com/pierrec/lz4"
)

func main() {
	// Process the command-line arguments
	var (
		blockMaxSizeDefault = 4 << 20
		flagStdout          = flag.Bool("c", false, "output to stdout")
		flagDecompress      = flag.Bool("d", false, "decompress flag")
		flagBlockMaxSize    = flag.Int("B", blockMaxSizeDefault, "block max size [64Kb,256Kb,1Mb,4Mb]")
		flagBlockDependency = flag.Bool("BD", false, "enable block dependency")
		flagBlockChecksum   = flag.Bool("BX", false, "enable block checksum")
		flagStreamChecksum  = flag.Bool("Sx", false, "disable stream checksum")
		flagHighCompression = flag.Bool("9", false, "enable high compression")
	)
	flag.Usage = func() {
		fmt.Fprintf(os.Stderr, "Usage:\n\t%s [arg] [input]...\n\tNo input means [de]compress stdin to stdout\n\n", os.Args[0])
		flag.PrintDefaults()
	}
	flag.Parse()
	fmt.Println("output to stdout ", *flagStdout)
	fmt.Println("Decompress", *flagDecompress)
	// Use all CPUs
	runtime.GOMAXPROCS(runtime.NumCPU())

	zr := lz4.NewReader(nil)
	zw := lz4.NewWriter(nil)
	zh := lz4.Header{
		BlockDependency: *flagBlockDependency,
		BlockChecksum:   *flagBlockChecksum,
		BlockMaxSize:    *flagBlockMaxSize,
		NoChecksum:      *flagStreamChecksum,
		HighCompression: *flagHighCompression,
	}

	worker := func(in io.Reader, out io.Writer) {
		if *flagDecompress {
			fmt.Println("\n 解压缩数据")
			zr.Reset(in)
			if _, err := io.Copy(out, zr); err != nil {
				log.Fatalf("解压缩输入时出错:%v", err)
			}
		} else {
			zw.Reset(out)
			zw.Header = zh
			if _, err := io.Copy(zw, in); err != nil {
				log.Fatalf("压缩输入时出错:%v", err)
			}
		}
	}

	// No input means [de]compress stdin to stdout
	if len(flag.Args()) == 0 {
		worker(os.Stdin, os.Stdout)
		os.Exit(0)
	}

	// Compress or decompress all input files
	for _, inputFileName := range flag.Args() {
		outputFileName := path.Clean(inputFileName)

		if !*flagStdout {
			if *flagDecompress {
				outputFileName = strings.TrimSuffix(outputFileName, lz4.Extension)
				if outputFileName == inputFileName {
					log.Fatalf("无效的输出文件名:与输入文件相同:%s", inputFileName)
				}
			} else {
				outputFileName += lz4.Extension
			}
		}

		inputFile, err := os.Open(inputFileName)
		if err != nil {
			log.Fatalf("打开输入文件时出错:%v", err)
		}

		outputFile := os.Stdout
		if !*flagStdout {
			outputFile, err = os.Create(outputFileName)
			if err != nil {
				log.Fatalf("打开输出文件时出错:%v", err)
			}
		}
		worker(inputFile, outputFile)

		inputFile.Close()
		if !*flagStdout {
			outputFile.Close()
		}
	}
}

Sample input

go run compress.go -9=true sample.txt
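
Presumably, decompressing the file produced by that command would then look like this; the -d flag is defined in the code above, and sample.txt.lz4 is the name the compression run creates by appending lz4.Extension (decompressing will restore, and overwrite, sample.txt by stripping the .lz4 extension):

go run compress.go -d sample.txt.lz4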

Answer 4

Score: 0

I am also new to Go and have struggled a bit with github.com/pierrec/lz4.

What I was misunderstanding is that calling Close() on the writer returned by NewWriter is not optional, and failing to do so leads to incorrect results. (I spent a lot of time banging my head against the wall thinking it was optional and just a best practice, as it is when closing file handles, network connections, and so on.)

I wrote two wrapper versions for compressing/decompressing.

First, a generic reader/writer approach (similar to the example in the README, but without pipes) [playground]:

func compress(r io.Reader, w io.Writer) error {
    zw := lz4.NewWriter(w)
    _, err := io.Copy(zw, r)
    if err != nil {
        return err
    }
    // Closing is *very* important
    return zw.Close()
}

func decompress(r io.Reader, w io.Writer) error {
    zr := lz4.NewReader(r)
    _, err := io.Copy(w, zr)
    return err
}

If your data is small and you don't need or want to mess with buffers, and just want uncompressed bytes in, compressed bytes out (in a more "functional" fashion), this second version may be more convenient [playground]:

func compress(in []byte) ([]byte, error) {
    r := bytes.NewReader(in)
    w := &bytes.Buffer{}
    zw := lz4.NewWriter(w)
    _, err := io.Copy(zw, r)
    if err != nil {
        return nil, err
    }
    // Closing is *very* important
    if err := zw.Close(); err != nil {
        return nil, err
    }
    return w.Bytes(), nil
}

func decompress(in []byte) ([]byte, error) {
    r := bytes.NewReader(in)
    w := &bytes.Buffer{}
    zr := lz4.NewReader(r)
    _, err := io.Copy(w, zr)
    if err != nil {
        return nil, err
    }
    return w.Bytes(), nil
}
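
Tying this back to the original question, here is a minimal sketch of how these byte-slice wrappers might be used on an actual file. It assumes the file fits in memory, adds os to the imports above, and uses made-up file names:

func main() {
    // read the original file and compress it to a binary .lz4 file
    original, err := os.ReadFile("sample.txt")
    if err != nil {
        panic(err)
    }
    packed, err := compress(original)
    if err != nil {
        panic(err)
    }
    if err := os.WriteFile("sample.txt.lz4", packed, 0644); err != nil {
        panic(err)
    }

    // decompress the binary data and write the restored file
    restored, err := decompress(packed)
    if err != nil {
        panic(err)
    }
    if err := os.WriteFile("sample_restored.txt", restored, 0644); err != nil {
        panic(err)
    }
}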