Sunday 30 March 2014

Week 11: Sorting and Efficiency

So, over the past few weeks, we've been learning about a few different sorting methods and their efficiency. We focused on three methods in particular:
1) Selection Sort
2) Quick Sort
3) Merge Sort 

Selection Sort
The selection sort algorithm is pretty simple and straightforward compared to other sorting algorithms. The first thing that happens is that it splits the list of elements into two parts: a sorted part that grows on the left side and an unsorted part that shrinks on the right. The goal of each iteration of the algorithm is to place the smallest element of the unsorted part into its correct position. In other words, the algorithm compares the first element in the unsorted part to the remaining elements (in the unsorted part as well) and keeps track of the smallest element found. Once there are no more elements to check, the first unsorted element is swapped with the smallest element found (if necessary). This same operation is performed n times, where n is the number of elements in the collection. After this operation is performed for every position in the collection, the given collection becomes sorted.
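Here's a minimal sketch of selection sort in Python (my own version, not the exact code from lecture):

def selection_sort(lst):
    """Sort lst in place by repeatedly selecting the smallest remaining element."""
    n = len(lst)
    for i in range(n):
        # Find the index of the smallest element in the unsorted part.
        smallest = i
        for j in range(i + 1, n):
            if lst[j] < lst[smallest]:
                smallest = j
        # Swap it into position i (if it isn't already there).
        if smallest != i:
            lst[i], lst[smallest] = lst[smallest], lst[i]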

Quick Sort
This sorting algorithm uses a different approach. It first chooses one element from the collection to be sorted, the so-called pivot. Different quick sort implementations choose the pivot differently: some take the first element, some take the last, and some choose the pivot randomly. Once the pivot is chosen, it's placed in its correct position in the collection. This is done by putting all the elements smaller than the pivot to the left of the pivot and all the elements larger than the pivot to the right of the pivot. After this, the pivot may end up anywhere in the collection. It may become the first element (if it's the smallest), the last element (if it's the largest), or land somewhere in between the other elements. The key thing is that the algorithm guarantees that the pivot is placed in its final position in the sorted collection. Once the pivot is placed, the same technique is applied (in a recursive way) to the other two (or fewer) unsorted parts of the collection until the collection is sorted.
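And a simple sketch of quick sort, choosing the first element as the pivot (again my own version; as mentioned, other implementations pick the pivot differently):

def quick_sort(lst):
    """Return a new sorted list using the quick sort technique."""
    if len(lst) <= 1:
        # A collection of zero or one elements is already sorted.
        return lst
    pivot = lst[0]
    smaller = [x for x in lst[1:] if x < pivot]
    larger = [x for x in lst[1:] if x >= pivot]
    # The pivot lands in its final position, between the two parts.
    return quick_sort(smaller) + [pivot] + quick_sort(larger)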

Merge Sort
The key point of merge sort is that this algorithm can easily make one sorted collection out of two other sorted collections. It does so by repeatedly comparing the smallest remaining element of each collection (if such exist) and placing the smaller one into a new collection, until one of the collections becomes empty. Once one of the collections becomes empty, the remaining elements of the other collection are simply appended to the new collection. But this is just one part of the merge sort algorithm. When we are given an unsorted collection of items that we want to sort, we don't have a pair of sorted collections to merge, so the other part of merge sort creates that condition. Since a collection of a single element is always sorted, the algorithm breaks the given unsorted list down into smaller and smaller pieces until it reaches collections of a single element. Then it merges them back together using the technique described above until the collection is sorted.
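A sketch of merge sort along the same lines (one common way to write it; the lecture version may differ):

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        # Compare the smallest remaining element of each collection.
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One list is now empty; append the other's remaining elements.
    return merged + left[i:] + right[j:]

def merge_sort(lst):
    """Return a new sorted list by splitting, sorting and merging."""
    if len(lst) <= 1:
        # A collection of a single element is always sorted.
        return lst
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))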

Efficiency
Efficiency is a very important aspect of programming to consider when developing an algorithm to solve a particular problem. An inefficient solution to a problem will do more work (calculations, storage and so on) than is required. Or in other words, such a solution will use more resources than it should, such as memory space and processing time. So whenever we have two algorithms that solve the same problem, we should generally prefer the more efficient one for large inputs, even if it's longer and more complex to write.
The efficiency of any algorithm can be expressed in terms of the size of the input (usually denoted by n). Even though we can express the precise growth function of an algorithm, in most cases we are only interested in the dominant term, i.e. the Big-O value, which provides an accurate estimate for large input sizes.
If n is the size of the collection to be sorted, then the following are the efficiencies of the sorting algorithms described above:
Selection sort: O(n²) in every case.
Quick sort: O(n log n) in the best and average cases and O(n²) in the worst case.
Merge sort: O(n log n) in every case.
To see a graph on efficiency, check http://csc148h1.blogspot.ca/2014/03/week-9-run-time-analysis.html

So if we had to choose one of these three sorting algorithms to sort large collections of items, we should choose merge sort, because it is the only one of the three that guarantees O(n log n) behaviour even in the worst case.

So anyways, this is the last blog I'll be writing for this course. Every part of this course was an amazing experience and I hope you had fun reading my blog. Thank you and good night!


Sunday 23 March 2014

Week #10: Trees with Linked Lists

This week in class we covered a couple of different sorts and their run times, but since next week's slog will be dedicated to writing about sorting and efficiency, I think I'll just talk about something else.

This week's lab was the hardest lab we've had. Nobody in the class figured out the solution. Lucky for us, our super TA, Amirali, knew exactly how to do the coding. He explained it step by step and showed us what he was doing and why. I didn't really understand it at first, but once I started looking over the code, it started to make sense. The first part we had to do was an in-order traversal that built up a linked list. It's not as easy as it sounds, because we needed some sort of placeholder that we could use to refer to the front of the list and to the back of the list. We hadn't used this concept before, so it took me a while to understand it.
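I don't have the exact code from the lab, but the core idea, as far as I understood it, looked roughly like this. This is a sketch of my own with made-up names (BTNode, LLNode, inorder_to_linked), assuming the task was to flatten a binary tree into a linked list in order while tracking both ends:

class BTNode:
    """A binary tree node."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

class LLNode:
    """A linked list node."""
    def __init__(self, value):
        self.value = value
        self.nxt = None

def inorder_to_linked(t):
    """Return (front, back) of a linked list of t's values, in order."""
    if t is None:
        return None, None
    left_front, left_back = inorder_to_linked(t.left)
    node = LLNode(t.value)
    right_front, right_back = inorder_to_linked(t.right)
    # The placeholders: the front of the combined list is the
    # leftmost node, and the back is the rightmost one.
    front = left_front if left_front is not None else node
    if left_back is not None:
        left_back.nxt = node
    node.nxt = right_front
    back = right_back if right_back is not None else node
    return front, back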

Other than that, we have our second test this week. I hope I do well on it. I did pretty well on my first test and very well on the first assignment, so I hope I can keep up the good marks. I love programming and I hope to learn a lot more in the coming years. This week I don't have much to write about, but I hope you enjoyed it. Until next time.

Sunday 16 March 2014

Week #9: Run-Time Analysis

So this week was a pretty straightforward week. We were talking about the run time of a program, that is, the different rates at which the amount of work a program does can grow with its input. We learned a few different categories of run time. They are as follows:

1. Constant - The amount of code executed is the same no matter what the input is.
2. Logarithmic - The amount of code executed grows logarithmically with the size of the input.
3. Linear - The amount of code executed grows linearly.
4. Quadratic - The amount of code executed grows quadratically.
5. Cubic - The amount of code executed grows cubically.
6. Exponential - The amount of code executed grows exponentially.
7. Horrible - The amount of code executed grows at a greater rate than any of the ones mentioned above.
You can see from the graph how the run time grows with the input size for each of these categories.

The run times written above are listed in order from slowest-growing to fastest-growing. It's obvious that the more code that is executed, the longer the run time will be. And for all of these, a very small input will not make much of a difference in the run times. But if you're calculating something with a large input, the difference can be significantly large!

Depending on how the run time grows, we say that the run time shows O(...) behavior, with the growth pattern inside the parentheses. We wouldn't literally write O(constant) or O(linear); instead we write the growth term itself, for example O(1) for constant or O(n) for linear. Anyways, other than that, the class this week was pretty straightforward. Until next time.
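To make these categories a bit more concrete, here are a few toy Python functions I came up with (my own examples, not from lecture) that fall into three of them:

def first_item(lst):
    # Constant, O(1): one step no matter how long the list is.
    return lst[0]

def total(lst):
    # Linear, O(n): touches each of the n elements exactly once.
    result = 0
    for x in lst:
        result += x
    return result

def has_duplicates(lst):
    # Quadratic, O(n²): compares every pair of elements.
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] == lst[j]:
                return True
    return False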

Sunday 9 March 2014

Week #8: LinkedList

Something we started last week and carried on with this week was the LinkedList (LL). LLs are very similar to Trees but a lot more difficult to traverse if not set up properly. The difference between LLs and Trees is that a node in a Tree can have any number of children, each of which can in turn have more children or none at all, while a node in a LL has at most one child, which in turn links to at most one more. So in a Tree, different routes take you to different nodes, but in a LL there's only one route. One advantage a LL has is that it allows efficient insertion or removal of elements from any position without reallocating the structure. This only works if you've built the LL in a certain way, which we learned this week.

To be able to traverse the LL properly, you need to keep a reference that lets you track the front of the list as more nodes are added on, and you also need to keep track of the length of the list. This will help when you need to find a specific part of the LL so that you can manipulate that section however you need to.
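Here's a minimal sketch of what I mean, with names of my own choosing (LinkedListNode, LinkedList, append), not the exact class from lecture:

class LinkedListNode:
    """A single node holding a value and a reference to the next node."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt

class LinkedList:
    """A linked list that keeps track of its front, back and length."""
    def __init__(self):
        self.front = None
        self.back = None
        self.size = 0

    def append(self, value):
        """Add value at the back without walking the whole list."""
        node = LinkedListNode(value)
        if self.back is None:
            # Empty list: the new node is both the front and the back.
            self.front = node
        else:
            self.back.nxt = node
        self.back = node
        self.size += 1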

After that we learnt something pretty useful: if you want to write a method for a special function that doesn't allow any extra parameters (such as __str__), you can write a separate function that does the work of representing the info as a string, and then call that function inside __str__. Even though this idea is quite simple, it had never crossed my mind and will prove to be useful in the future.
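For example, continuing the hypothetical LinkedList sketch above, it might look something like this:

def list_to_str(node):
    """Walk the list from node onward, joining the values with arrows."""
    parts = []
    while node is not None:
        parts.append(str(node.value))
        node = node.nxt
    return " -> ".join(parts)

# and inside the LinkedList class:
#     def __str__(self):
#         # __str__ takes no extra parameters, so it just hands the
#         # front of the list to the helper, which does the real work.
#         return list_to_str(self.front)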

This week in lab we had to do some more work with LLs. It was pretty straightforward and my partner and I finished in about 40 minutes. We just sat there for the next hour and a half finishing off extras from the lab handout. After that we started talking about part 1 of A2, and I helped him realize that the code he was working on was for part 2 and that part 1 was much simpler than he had anticipated. He was irritated that he had misunderstood part 1 of the assignment but was optimistic that he wouldn't need to worry too much about part 2.

So that's a little insight on LLs. Until next time.

Sunday 2 March 2014

Week #7: Recursion

"Recursion is truly one of the most beautiful and elegant tools in computer science." 
(Image: a perfect example of recursion in everyday life)

I couldn't agree with the above statement any more. I spoke about recursion briefly in my Week #4 post, but since then I have grown to understand recursion better than ever. What is recursion? Recursion is a method of solving problems that have smaller versions of the same problem lodged within them. In Python, it is used by creating a function that calls itself until the problem within the problem is solved, and then that answer is used to solve the bigger problem. This is very helpful in implementing complex algorithms. It's difficult to understand at first, but once you start using it, you tend to get the hang of it.

Back when computer hardware was still in its early stages, recursion was expensive to run. But now, thanks to advances in hardware, recursion is cost-efficient, and it saves you from working out and maintaining the algorithm by hand. It's also much less error-prone (if the code is written well enough).

There are three main things to keep in mind when building a recursive function.

  1. The recursive function must have a base case. - The reason for this is that the function must terminate somewhere. If there is no base case, the recursion will keep going until Python hits its recursion limit and throws an error at you.
  2. A recursive function must change its arguments and work towards its base case. - This is quite obvious, because if the function were to call itself with the same arguments, it would just recurse forever, which, once again, will throw an error at you.
  3. The most obvious one is that the recursive function must call itself, recursively. Without this, there is no recursive function.
Let's take an example of a simple function that needs recursion to work. Let's say we have a list of lists and we want to add all the numbers within them.
num_list = [0, 1, 2, [0, 1, 3], 1]
The first thing we need to determine is the base case. We know we want to go through each item, so we'll use a for loop. If the item we get is a number, we add it to the total. But if it's a list, we need to go through it and add the numbers within it as well. So it would look something like this:

def add_num(num_list):
    total = 0
    for item in num_list:
        if isinstance(item, list):
            # Recursive case: item is a nested list, so go through
            # the function again and add its result to the total.
            total += add_num(item)
        else:
            # Base case: item is a number, so just add it.
            total += item
    return total
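Calling add_num(num_list) on the list above returns 8.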

So that's basically recursion. If you follow the above steps, you'll usually end up with a correct function. Until next time.


Sunday 23 February 2014

Week #6: Trees

This week we learned about a different kind of data structure. These were called trees. They were made into a separate class of their own and we were taught the terminology of the Tree class. Each value was called a node. Each node could only have one parent, meaning that it was part of only one tree and not multiple. The node at the top was known as the root, and there was only one unique path from the root to each node. There were no loops.


Within the tree, there was terminology to describe specific kinds of nodes. They are as follows:

  1. Leaf - A node with no children.
  2. Internal Node - A node with one or more children. (It's possible for a node to have more than one child, but not more than one parent.)
  3. Subtree - A tree formed by any tree node together with its descendants and the edges leading to them.
There were also two other terms:
  1. Height - The maximum path length in a tree, where each node counts as one. So the maximum number of nodes on a path from the root to the deepest node is the height.
  2. Arity / Branching Factor - The maximum number of children for any node.
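To make the terminology concrete, here's a rough sketch of what such a Tree class might look like in Python. This is my own guess at the structure (the names Tree, is_leaf and height are mine), not the class from lecture:

class Tree:
    def __init__(self, value, children=None):
        self.value = value
        # A node can have any number of children (possibly none).
        self.children = children if children is not None else []

    def is_leaf(self):
        """A leaf is a node with no children."""
        return len(self.children) == 0

    def height(self):
        """Maximum path length, counting each node as one."""
        if self.is_leaf():
            return 1
        return 1 + max(child.height() for child in self.children)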
I have absolutely no idea how this is going to help us as of yet, but I've taken a look at assignment 2 and it's based on designing a class similar to this. We'll see how that goes.

Also, last week in the labs we had to do some work that made us think outside the box. We had to use loops to recreate some built-in Python functions. The lab was pretty straightforward and not too challenging.

We've got the midterm this week and just received an email saying that we're allowed to bring one cheat sheet with stuff written on both sides. I'm not quite sure what to write on it, but I've got two days to figure it out. Anyways, I hope you enjoyed reading this and that the explanation of the Tree class was clear (to be honest, I was actually pretty lost until I wrote this; now I've got some sort of clue about what's going on). Until next time.

Sunday 9 February 2014

Week #5: Scopes and Namespaces

This week we started with the Towers of Hanoi recursion algorithm. The one shown in class was for the 3-stool problem. I had already figured out how to write that one and had implemented it. The result I got was not the optimal solution, but I thought it could work, since I couldn't get the recursive function for the optimal i to work. Luckily, after much research, I found a different way to find the i. After that, I worked on a solution for the 4-stool problem, so I finally completed the assignment with the optimal solution. All I've got left to do is go through my code and clean up the docstrings, making sure that all the names are consistent and not repeated.
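For reference, the classic 3-stool recursion we saw looks roughly like this. This is a generic sketch from memory (the name move_disks and the printing are my own choices), not my actual assignment code:

def move_disks(n, source, target, spare):
    """Print the moves needed to shift n disks from source to target."""
    if n == 1:
        print(source, "->", target)
    else:
        # Move the n - 1 smaller disks out of the way onto the spare stool,
        move_disks(n - 1, source, spare, target)
        # move the biggest disk to its destination,
        print(source, "->", target)
        # then stack the n - 1 smaller disks back on top of it.
        move_disks(n - 1, spare, target, source)

For example, move_disks(3, "A", "C", "B") prints the seven moves needed for three disks.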

The other things the professor talked about were quite simple. They were things I had come across while writing code in CSC108, where I ran into many errors, and those errors helped me understand how naming works. I didn't know there would be a whole lecture dedicated to how it worked. When the professor was showing us the example, I got it straight away due to experience. For more information on namespaces and scopes, check http://docs.python.org/3.3/tutorial/classes.html#python-scopes-and-namespaces.

This week in lab we did some more practice with recursion. We traced the recursion on paper and then wrote a bit of code, which was a bit challenging, but my partner and I got through it with a bit of help from our TA. Other than that, this week was a pretty slow week. Not too interesting. Hope you enjoyed reading this. Until next time.