Read that article; seems to me he designed his test to validate his own confirmation bias.
Let's see: when you compare RAM and swap, disk access is (say, roughly) 500 times slower. So a page fault that has to swap in from disk is roughly 500 times slower than a plain memory access, and even swapping out an unused page generates I/O, which is very, very slow.
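To put rough numbers on that (the latencies below are illustrative assumptions, not measurements: ~100 ns for a RAM access, ~5 ms for a swap-in from a spinning disk), here's a quick back-of-the-envelope in Python showing how even a tiny fault rate wrecks the average access time:

    # Rough effective-access-time model; latency numbers are assumptions, not measurements.
    RAM_NS = 100            # ~100 ns for a memory access (assumed)
    DISK_NS = 5_000_000     # ~5 ms for a swap-in from spinning disk (assumed)

    def effective_ns(fault_rate):
        """Average access cost when fault_rate of accesses hit swapped-out pages."""
        return (1 - fault_rate) * RAM_NS + fault_rate * DISK_NS

    for rate in (0.0, 0.001, 0.01, 0.1):
        slowdown = effective_ns(rate) / RAM_NS
        print(f"fault rate {rate:>5.1%}: {effective_ns(rate):>12,.0f} ns avg ({slowdown:,.0f}x RAM)")

With those made-up numbers, a 1% fault rate already puts the average access around the 500x mark.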
In the old days, when you had 640 KB of RAM, having swap roughly twice the size of RAM gave you the illusion of three times the memory, but very slow memory. And when you wanted a huge chunk of data, the computer would give you the same error message as when it had no swap at all. But today we have boatloads of RAM and RAM is relatively cheap, so just add more RAM, which is way, way faster than a swap disk.
Now, coming to ramdisk-based swap: swap-ins and swap-outs just add one more memory transfer, moving pages from one spot in RAM to another. How is that supposed to improve performance? Swapping there is creating unnecessary movement of pages within RAM. And if those pages get copied across NUMA nodes, it's a much costlier operation than copying within the same node. Just let the page live where it is instead of moving it around in the name of swapping and adding more DMA calls...
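Just to frame that, a toy cost model (the bandwidths and per-page overhead below are assumptions pulled out of the air, not measurements): a "swap" to a ramdisk is basically an extra page-sized memcpy plus fault-handling bookkeeping, it gets worse when the copy crosses NUMA nodes, and it's pure overhead compared to leaving the page where it is.

    # Toy cost model for ramdisk swap; bandwidths and overhead are assumptions.
    PAGE_BYTES = 4096
    LOCAL_COPY_GBPS = 10.0        # assumed same-NUMA-node copy bandwidth
    REMOTE_COPY_GBPS = 4.0        # assumed cross-NUMA-node copy bandwidth
    FAULT_OVERHEAD_US = 1.0       # assumed kernel fault/swap bookkeeping per page

    def swap_cost_us(cross_node):
        """Cost of 'swapping' one page to a ramdisk: an extra memcpy plus overhead."""
        gbps = REMOTE_COPY_GBPS if cross_node else LOCAL_COPY_GBPS
        copy_us = PAGE_BYTES / (gbps * 1e9) * 1e6
        return copy_us + FAULT_OVERHEAD_US

    print("leave page in place : 0.00 us")
    print(f"ramdisk swap, local : {swap_cost_us(False):.2f} us per page")
    print(f"ramdisk swap, remote: {swap_cost_us(True):.2f} us per page")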
Personal anecdote time... I have done extensive I/O testing on enterprise servers for a very long time, things like saturating network links in SAN environments, and I never used swap on my servers/virtual machines. I don't think swap is a good thing in today's world, where you can easily slap in an extra 32 GB of RAM. If an application is using more memory than that, it's probably a badly written application.
Anyone looked into XFS? It's a fairly advanced filesystem too...