Mirror of https://github.com/cubixle/csv-vs-parquet.git
synced 2026-04-24 14:44:41 +01:00
add codeblock formatting
@@ -8,6 +8,7 @@ The dataset will be N randomly generated entries with 3 columns.
Using the Hyperfine benchmarking tool, I ran the built binary for the CSV file and the Parquet file, and as you can see from the results below, Parquet is much faster at writing 100,000 rows.
```sh
$ ./tests.sh
Benchmark 1: ./app -type csv -amount 100000
Time (mean ± σ): 244.2 ms ± 8.7 ms [User: 22.7 ms, System: 197.6 ms]
@@ -16,3 +17,4 @@ Benchmark 1: ./app -type csv -amount 100000
Benchmark 1: ./app -type parquet -amount 100000
Time (mean ± σ): 38.3 ms ± 3.8 ms [User: 62.0 ms, System: 9.9 ms]
Range (min … max): 26.4 ms … 44.4 ms 69 runs
```