How I Boosted My WPA2 Cracking Speed by Over 300% Using Hashcat, Without Any Hardware Upgrade

Using the simple techniques in this post, you can save significantly on your Hashcat/WPA2 cracking time, resources, and electricity bills.

TL;DR

This whole topic can be summed up in the following one-liner:

cat wpa.txt | uniq | awk 'length >= 8 && length <= 20' > new-wordlist.txt

You can use this one-liner to save cracking time on WPA2 passphrases in your red-teaming operations (or maybe just for fun, hacking your neighbours) and ignore the rest of the post. But if you are interested in the background details, keep reading…

The command above simply prints the content of a file named wpa.txt in the current working directory. The output is piped to the uniq command, which strips out duplicates (strictly speaking, uniq only removes adjacent duplicates; sort the list first, or use sort -u, if you need full deduplication). That is piped in turn to the awk command, which keeps only the words (candidate passphrases) between 8 and 20 characters long. Everything written to STDOUT is then saved in a new file named new-wordlist.txt.

Know that WiFi with the WPA2-PSK encryption scheme has a password length ranging from a minimum of 8 characters to a maximum of 63 characters.

The one-liner above uses 20 characters as the upper limit because I think:

- Nobody would keep a WiFi password that long.
- Even for strings longer than that, the chances of cracking with a wordlist are very thin.

You may change that as you wish. Just keep the numbers within 8-63 and you are good to go.

———————————————————————————————

One point of argument may arise here:

- "I use Hashcat with my super-fast cracking rig; it simply rejects the words that don't lie within the WPA2 range. Why would I care about optimising my wordlist?"

Well, the data simply doesn't support this argument.
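As a quick sanity check, the one-liner can be exercised on a throwaway sample file; the contents below are made up purely for illustration:

```shell
# Build a small sample wordlist (one candidate passphrase per line,
# with duplicates and out-of-range lengths mixed in).
printf 'short\nshort\npassword123\npassword123\nsupersecretpass\naveryveryverylongpassphrasewellover20chars\n' > wpa.txt

# Drop adjacent duplicates, then keep only candidates 8-20 characters long.
cat wpa.txt | uniq | awk 'length >= 8 && length <= 20' > new-wordlist.txt

cat new-wordlist.txt
# password123
# supersecretpass
```

Only the two in-range passphrases survive, each once.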
I performed a few tests to see whether it actually makes a difference to exhaust the complete wordlist with hashcat on a non-optimised wordlist versus an optimised one.

Here are my results:

File name: Words.txt
Size: 234 MB
Optimised wordlist size: 48 MB

Time taken to exhaust the whole wordlist:
Words.txt: real 7m34.896s
new-wordlist.txt: real 2m2.207s

For comparison, watch the following video where I exhaust the rockyou.txt wordlist on my MacBook Pro 2018 (16 GB RAM) with a Radeon Pro 560 (4 GB graphics memory): https://youtu.be/uYJsyg0vgPo

That's roughly 3.7x of time saved. Even when the time consumed to generate new-wordlist.txt is counted in, we are still about 3.5x ahead of the usual dictionary:

time cat Words.txt | uniq | awk 'length >= 8 && length <= 20' > words
real 0m6.161s
user 0m10.713s
sys 0m0.333s

From this we understand that hashcat does reject words of the wrong length, but perhaps only after performing some work on them first; part of the 4096 PBKDF2 (HMAC-SHA1) iterations that WPA2 key derivation uses, maybe? I don't know. But one thing is sure: if we craft our wordlist to lengths strictly within the acceptable range, we can reduce our cracking time many times over.

Another question arises:

- "You used 20 as the upper limit, whereas the original file may contain passphrases longer than 20 characters that consumed the extra time."

I also ran a test with an optimised wordlist ranging from length 8 to 63.

new-wordlist file size: 53 MB
Total time for hashcat to exhaust the wordlist: real 2m7.833s

So, no significant change there. We are still worlds apart from the original file's exhaustion time.

Moving on…

So far we know that an optimised wordlist is far better than throwing just any random wordlist at your WPA2 cracking operations. It only takes a few seconds to regenerate an optimised wordlist out of a raw one.
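For reference, a run like the one timed above would look roughly as follows. The file names here are placeholders for your own capture; current hashcat versions use hash mode 22000 for WPA-PBKDF2 handshakes (older releases used -m 2500 with .hccapx files):

```shell
# Convert a packet capture containing a WPA2 handshake
# (hcxpcapngtool ships with the hcxtools package).
hcxpcapngtool -o capture.hc22000 handshake.pcapng

# Time a full dictionary run against the capture,
# first with the raw list, then with the optimised one.
time hashcat -m 22000 capture.hc22000 Words.txt
time hashcat -m 22000 capture.hc22000 new-wordlist.txt
```

The `real` values reported by `time` are the numbers compared in this post.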
On my i7 MacBook Pro it takes about 6 seconds for every ~250 MB of text file.

———————————————————————————————

Reducing the time to craft new-wordlist.txt

NOTE: The following section doesn't make a dent in your WPA2 cracking time, but if you catch the essence of it, your life will be much better in terms of efficiency, and less production hassle will come your way as a reward.

You might have a wordlist in gigabytes, maybe 10+ GB. Or, after reading this post, you might want to combine all your existing wordlists (100+ GB, maybe) into a single wpa-list.txt. It will take some time to optimise with our method, and since these shell commands run as single-threaded processes, we need better-optimised methods to save more of our valuable time.

Using shell commands, I know of two ways to get the job done:

- the awk command
- the grep command

There are other ways as well, but I am leaving those for you to explore and share back.

Let's take Words.txt as an example, for a range of 8 to 20 characters.

Commands used:

time cat Words.txt | grep -xE '.{8,20}' | uniq > wgu
real 0m23.814s
user 0m26.856s
sys 0m0.720s

time cat Words.txt | awk 'length >= 8 && length <= 20' | uniq > wordlist-awk
real 0m18.207s
user 0m21.419s
sys 0m0.384s

With awk it takes 18 seconds, whereas with grep it takes 23 seconds, and the difference only grows with file size. That makes awk the obvious choice for our operations.

The current file size of Words.txt (the raw wordlist) was ~250 MB. If we multiply that by 100, say our raw wordlist is 25 GB, it would take 2,300 seconds for grep and 1,800 seconds for awk to finish the job.

That's an extra 8 minutes for approximately the same output!

Optimised one-liner

By now you must be in favour of using awk for your wordlist-optimisation script.
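On a small sample you can verify that the grep and awk filters agree; the file names below are throwaways for illustration:

```shell
# Create a sample list with lines of assorted lengths.
printf 'short\npassword123\nthisoneistwentychars\nwaytoolongforourwpa2range\n' > sample.txt

# grep: match whole lines (-x) of 8 to 20 characters.
grep -xE '.{8,20}' sample.txt > out-grep.txt

# awk: keep lines whose length is between 8 and 20.
awk 'length >= 8 && length <= 20' sample.txt > out-awk.txt

# The two outputs are identical.
diff out-grep.txt out-awk.txt && echo "identical"
```

Same filter, two tools; only the runtime differs.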
After all, it is saving us some extra time.

What if I told you that you can cut the generation time to a third just by rearranging our one-liner?

It doesn't usually occur to us to reorganise one-liners and see if that makes any legitimate difference. But here we are, looking to further save our resources, time, effort, and stress, and most importantly to build a conscious habit of working efficiently. :)

An intuitive way to write the one-liner is to filter with awk first and deduplicate afterwards:

cat Words.txt | awk 'length >= 8 && length <= 20' | uniq > new-wordlist.txt

I saved the best advice for the most interested ones. Since you've made it to this point of the post, I welcome you to see how you can reduce the wordlist-generation time threefold.

One mandatory part of this one-liner is to cat the file contents to STDOUT. That is the first and foremost thing to do, so there is no rearrangement we can perform there. After that, we operate on the output in two parts:

- Scan the whole file and remove duplicates.
- Remove strings shorter than 8 or longer than 20 characters.

By simply putting the awk command after uniq, I was able to cut the wordlist-generation time to a third. The cheap uniq pass shrinks the stream before the comparatively expensive awk pass has to touch it. Here are my results:

Words.txt file size: ~250 MB
Resulting wordlist size: exactly the same in both cases, 48 MB

time cat Words.txt | awk 'length >= 8 && length <= 20' | uniq > wau
real 0m18.294s
user 0m21.525s
sys 0m0.389s

time cat Words.txt | uniq | awk 'length >= 8 && length <= 20' > wua
real 0m6.292s
user 0m10.981s
sys 0m0.380s

That's 3x faster when uniq runs first. Compare this with the 25 GB combined-wordlist example: running awk before uniq would have taken 1,800 seconds, or 30 minutes, whereas uniq before awk would have taken only 10 minutes.

Twenty minutes' worth of time and electricity saved by a simple rearrangement. Huge difference, right?

Note that we didn't use any GPU for this operation. OpenCL/CUDA could make this process much, much faster, but that's not something I want to dive into at the moment.

Using the exact same CPU and the exact same file, with the exact same output size,
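One caveat on the reordering: uniq only collapses adjacent duplicates, so the two orderings are guaranteed to produce identical output only when duplicates sit next to each other (as in a sorted list); on the Words.txt above the outputs happened to match exactly. A quick check on a made-up sample with adjacent duplicates:

```shell
# Sample list with adjacent duplicates and one out-of-range line.
printf 'password123\npassword123\nshort\nlongenoughpassword\nlongenoughpassword\n' > sample.txt

# Length filter first, then dedup.
awk 'length >= 8 && length <= 20' sample.txt | uniq > order-a.txt

# Dedup first, then length filter (the faster ordering).
uniq sample.txt | awk 'length >= 8 && length <= 20' > order-b.txt

# Both orderings agree on this input.
diff order-a.txt order-b.txt && echo "same output"
```

If your raw list may contain scattered duplicates, run it through sort -u once before settling on either ordering.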
we achieved a 300% performance boost simply by rearranging our commands.

There are other ways to make this even more efficient; I would like to hear your views on it. Share your script along with your results.

Happy Hacking
- Hardeep Singh (@rootsh3ll) via /r/hacking http://bit.ly/2BigWes