In the project I am working on, I need to shuffle an array of over 2 million elements. At first I thought it would be easy, and I quickly wrote some code based on the simplest logic: extract a random element from the array, store it in a new array, and repeat as many times as the size of the array. When I ran the code, however, I found it extremely time-consuming for a large array. Simulations indicate that the running time grows quickly with the array size, roughly quadratically, since each extraction requires shifting the remaining elements.
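The post does not include the original code, but a minimal sketch of this naive approach (in Python, purely as an illustration) might look like the following; the quadratic cost comes from `pop(i)`, which shifts every element after position `i`:

```python
import random

def naive_shuffle(arr):
    # Repeatedly extract a random element from the source list and
    # append it to a new list. Each pop from the middle of a Python
    # list is O(n), so the whole shuffle costs O(n^2) overall.
    source = list(arr)
    result = []
    while source:
        i = random.randrange(len(source))
        result.append(source.pop(i))
    return result
```

This does produce a uniform shuffle; it is only the running time that makes it impractical for millions of elements.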
I had to find another solution. After reading some related material, I tried the so-called Fisher-Yates algorithm. To my delightful surprise, the algorithm saves an enormous amount of time, and its running time increases essentially linearly with the array size, as shown in the following comparison (in units of seconds):
Array size    | My naive algorithm | Fisher-Yates algorithm
1,000         |          1         |   0
5,000         |         22         |   0
10,000        |         90         |   0.1
20,000        |        396         |   0.2
2,500,000     |        Inf         |  34
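The post does not show the Fisher-Yates code either, but the standard in-place form of the algorithm is short; here is a sketch in Python (the author's actual language is unknown):

```python
import random

def fisher_yates_shuffle(arr):
    # Walk the array from the last position down to the second.
    # Swap each element with a uniformly random element at or before
    # its position. A single O(n) pass, with every permutation
    # equally likely.
    for i in range(len(arr) - 1, 0, -1):
        j = random.randint(0, i)  # pick j with 0 <= j <= i
        arr[i], arr[j] = arr[j], arr[i]
    return arr
```

Because it swaps in place instead of extracting elements into a new array, each step is O(1), which is exactly why the measured time scales linearly in the table above.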
This is the first time an algorithm has demonstrated its supreme power to me, and I am overwhelmed.
Tuesday, April 15, 2008
1 comment:
That's why I always found data structures and algorithms such a fun subject :)