I should've chimed in a bit earlier.
NVMe testing is very tricky. To explain a bit more: the drive sits directly on the PCIe bus, unlike traditional SATA drives, which means there is a direct highway between the drive, memory, the DMA controller, and the CPU, with no SATA/AHCI protocol translation in between. On top of that, NVMe is a very simple protocol optimized for speed, and it supports far more submission/completion queues than SATA (up to 64K queues with 64K commands each, versus AHCI's single queue of 32 commands). For all these reasons, NVMe drives are very, very fast.
Now, coming to the perf numbers: drive manufacturers always want to show off, so they post the best numbers, which means raw drive performance. To explain: writing a file through the operating system is a stack of protocol conversions. On Windows with a SATA drive, for example: right click -> paste -> NTFS read+write (NTFS implements its own caching) -> block device layer (has its own caching) -> SCSI protocol -> SATA protocol -> the actual drive. With NVMe, a few of those layers are removed.
Drive companies also prefer to advertise MB/s instead of IOPS, because with a 1 MB block size you get a much higher MB/s figure for the same IOPS. And they skip all the file system, block layer, and other overheads by testing against the raw device. This is why we don't use simple Linux commands like dd for testing NVMe; we use fio instead.
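Back-of-the-envelope sketch of why the MB/s headline depends on block size: throughput is just IOPS times block size, so the same IOPS number looks wildly different at 1 MB versus 4 KiB blocks (the 5,000 IOPS figure below is made up for illustration):

```python
def throughput_mbps(iops: int, block_size_bytes: int) -> float:
    """Throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size_bytes / 1_000_000

# Same 5,000 IOPS, very different marketing numbers:
print(throughput_mbps(5_000, 1_000_000))  # 1 MB blocks  -> 5000.0 MB/s
print(throughput_mbps(5_000, 4_096))      # 4 KiB blocks -> 20.48 MB/s
```

Same drive, same IOPS, a 244x difference in the MB/s figure purely from the block size chosen for the test.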
In your case you are simply doing a file copy/paste and expecting it to match CrystalDiskMark/fio numbers. It won't work like that.
TL;DR: drive performance stats come with too many asterisks. Use fio if you want to test NVMe drives, preferably on a Linux box against the raw device. (Linux because that's how everyone does it, and Windows is stupid anyway.)
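For the record, a minimal fio job file for a raw-device random read test might look like the sketch below. The device path `/dev/nvme0n1` is an assumption, so substitute your own; this job is read-only, but always double-check the device path before pointing fio at a raw drive.

```ini
; 4 KiB random reads against the raw NVMe device (read-only job).
; /dev/nvme0n1 is an assumed path -- replace with your actual device.
[global]
ioengine=libaio
direct=1            ; bypass the page cache
runtime=30
time_based=1
group_reporting=1

[randread-4k]
filename=/dev/nvme0n1
rw=randread
bs=4k
iodepth=32
numjobs=4
```

Save it as e.g. `nvme-randread.fio` and run `fio nvme-randread.fio` as root. `direct=1` plus the raw device path is what takes the file system and page cache out of the picture; bump `bs` to 1m and `rw` to `read` if you want to reproduce the sequential MB/s numbers from the spec sheet.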