Tuesday, November 26, 2013

csc148_slog_property_and_reduce

    In week 11 of CSC148, two powerful Python functions were introduced in lecture: property() and reduce().
    The property() function takes 4 parameters (fget, fset, fdel, doc). fget, fset, and fdel are three methods in a class, used to return the value, set the value, and delete the value respectively. We can add restrictions inside these methods.

def set_symbol(self: 'RegexTreeNode', s: str) -> None:
    """Set the private symbol, rejecting invalid ones."""
    if s not in '01e|.*':
        raise Exception('Invalid symbol: {}'.format(s))
    else:
        self._symbol = s

Then, if we add symbol = property(get_symbol, set_symbol, None, None) in the class:

>>> a = RegexTreeNode('1', [])
>>> a.symbol = 'a'
the assignment raises the exception from set_symbol ('Invalid symbol: a').

    Another use of property() is to prevent the value from being set or deleted by users at all (i.e. to make it read-only). If we add symbol = property(get_symbol, None, None, None) in the class, we can no longer set the value of object.symbol.

    Above all, writing a = property(...) creates a new attribute "a" on the class, and we can place restrictions on getting, setting, and deleting that attribute.
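    Here is a minimal sketch of a read-only property; the Point class and its names are made up for illustration:

class Point:
    """A point with a read-only x coordinate."""

    def __init__(self: 'Point', x: float) -> None:
        self._x = x

    def get_x(self: 'Point') -> float:
        """Return the private x coordinate."""
        return self._x

    # fset and fdel are None, so x can be read but not set or deleted
    x = property(get_x, None, None, 'read-only x coordinate')

>>> p = Point(3)
>>> p.x
3
>>> p.x = 5
AttributeError: can't set attribute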



    The second powerful function is reduce(). It takes three arguments (function, iterable, initializer). For example, reduce(lambda x, y: x + y, [1,2,3,4,5], 0) calculates 1+2+3+4+5 = 15. The first argument must be a function that takes two arguments. The third argument is the initial value, without which reduce() raises an error if the iterable is empty. The reduce() function makes it simple to apply a function to an iterable over and over again and finally reduce the iterable to a single value.
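    A small sketch of the idea (note that in Python 3 reduce lives in the functools module; in Python 2 it is a builtin):

from functools import reduce

# ((((0 + 1) + 2) + 3) + 4) + 5
print(reduce(lambda x, y: x + y, [1, 2, 3, 4, 5], 0))  # 15

# with an initializer, an empty iterable is fine
print(reduce(lambda x, y: x + y, [], 0))               # 0

# without the initializer, reduce() on an empty iterable raises TypeError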


Friday, November 8, 2013

csc148_slog_sorting_algorithm_and_linked_traversal

    In week 9 of CSC148, we learned some sorting algorithms and their time complexity in lecture, and in the lab we learned how to traverse a binary tree into a linked list.
    The first sorting algorithm is selection sort. The basic idea is to select the smallest (or largest) element and swap it with the first element, then repeat on the sublist (list[i:]) over and over again. The time complexity is O(n^2), because it needs 1+2+3+...+n steps to finish.
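    A sketch of that idea (my own version, not the lecture's code):

def selection_sort(L: list) -> None:
    """Sort L in place by repeatedly selecting the minimum."""
    for i in range(len(L)):
        # find the index of the smallest element in L[i:]
        min_index = i
        for j in range(i + 1, len(L)):
            if L[j] < L[min_index]:
                min_index = j
        # swap it with the first element of the suffix
        L[i], L[min_index] = L[min_index], L[i]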
    The second sorting algorithm is quick sort. The basic idea is to quick sort (recursion here!) the elements that are less than L[0], quick sort (recursion) the elements that are greater than L[0], then return the first part + [L[0]] + the second part. The worst-case time complexity is O(n^2), which happens when L[0] is always the smallest (or greatest) element in every recursion. The best-case time complexity is O(n*log n), because we split the elements into halves and each level of splitting needs n operations to put them together.
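    A sketch using L[0] as the pivot, as described above:

def quick_sort(L: list) -> list:
    """Return a new sorted list with the same elements as L."""
    if len(L) < 2:
        return L[:]
    pivot = L[0]
    smaller = [x for x in L[1:] if x < pivot]
    larger = [x for x in L[1:] if x >= pivot]
    # first stuff + L[0] + second stuff
    return quick_sort(smaller) + [pivot] + quick_sort(larger)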
    The third sorting algorithm is merge sort, which seems more complicated. I found a picture on Wikipedia that I think perfectly explains how merge sort works. The basic idea of merge sort is to merge (we have to write a helper function for merging) the two halves, each of which is merge sorted (recursion here!). The time complexity is O(n*log n) even in the worst case.
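    A sketch of merge sort with its merge helper (again my own version):

def merge(left: list, right: list) -> list:
    """Merge two sorted lists into one sorted list."""
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # one of the two lists is exhausted; extend with the leftovers
    return result + left[i:] + right[j:]

def merge_sort(L: list) -> list:
    """Return a new sorted list with the same elements as L."""
    if len(L) < 2:
        return L[:]
    mid = len(L) // 2
    return merge(merge_sort(L[:mid]), merge_sort(L[mid:]))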
 
    The fourth sorting algorithm is counting sort. The time complexity is O(n+k), where k is the greatest element in the list, which seems perfect. The basic idea is to create another list, list2, and use the indices of list2 to count the numbers. For example, if the list to sort is [1,2,3,5,1], then we create list2 = [0,2,1,1,0,1]. list2[1] = 2, indicating that there are two 1s; list2[5] = 1, indicating that there is one 5. Then we create the sorted list by traversing list2 (list2[0] = 0, do nothing; list2[1] = 2, append two 1s; list2[2] = 1, append one 2; ...).
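    A sketch matching the example above (assuming the list holds non-negative integers):

def counting_sort(L: list) -> list:
    """Return a new sorted list of the non-negative integers in L."""
    if not L:
        return []
    # counts[v] is the number of occurrences of v in L
    counts = [0] * (max(L) + 1)
    for v in L:
        counts[v] += 1
    # walk the counts, appending each value the right number of times
    result = []
    for v in range(len(counts)):
        result.extend([v] * counts[v])
    return result

>>> counting_sort([1, 2, 3, 5, 1])
[1, 1, 2, 3, 5]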
    Currently, I can get the time complexity of some algorithms intuitively, but I can't prove the time complexity rigorously. Maybe I will learn how to deal with this in 200-level and 300-level CSC courses.
    In the lab, we learned how to traverse a binary tree and return two linked list nodes (in the past, we had learned how to return a normal Python list). At first, I was very confused about why we have to return two linked list nodes (the first node of the traversal and the last node of the traversal) rather than one, and I had no idea how to construct the recursive function. After searching on Google, I found that the trick is to call the recursion on the left subtree and the right subtree, link the last node of the left subtree to the root, and link the root to the first node of the right subtree; then all the nodes are linked!!
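    A sketch of that idea. The names here are my own, not the lab's: I assume a binary tree node with value, left, and right attributes, and build LLNode objects for the inorder traversal:

class LLNode:
    """A linked list node with a value and a link to the next node."""
    def __init__(self: 'LLNode', value: object, link: 'LLNode'=None) -> None:
        self.value, self.link = value, link

def inorder_linked(node) -> tuple:
    """Return (first, last) LLNodes of the inorder traversal of the
    binary tree rooted at node, or (None, None) if node is None."""
    if node is None:
        return None, None
    left_first, left_last = inorder_linked(node.left)
    right_first, right_last = inorder_linked(node.right)
    root = LLNode(node.value)
    if left_last is not None:
        left_last.link = root    # last node of left subtree -> root
    root.link = right_first      # root -> first node of right subtree
    first = left_first if left_first is not None else root
    last = right_last if right_last is not None else root
    return first, last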
 



Monday, November 4, 2013

csc148_slog_binary_search_tree

    In week 8, we learned about binary search trees and the implementation of some BST methods such as insert and delete (in lecture), count_less (in lab), and __contains__ and max_node (in the exercise).
    The basic idea behind implementing insert and delete is to use recursion. The delete algorithm is comparatively more difficult, because when we delete a node (when data == node.data), we have to replace it with another node that doesn't break the BST ordering. Here we want to find the max node of the left subtree (or the min node of the right subtree) and use it to replace the deleted node.
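    A sketch of delete using the max of the left subtree; the node class and function names are made up for illustration, not the lecture's code:

class BSTNode:
    """A binary search tree node."""
    def __init__(self: 'BSTNode', data, left=None, right=None) -> None:
        self.data, self.left, self.right = data, left, right

def find_max(node: BSTNode) -> BSTNode:
    """Return the node with the largest data in the subtree at node."""
    while node.right is not None:
        node = node.right
    return node

def delete(node: BSTNode, data) -> BSTNode:
    """Return the root of the subtree at node with data deleted."""
    if node is None:
        return None
    if data < node.data:
        node.left = delete(node.left, data)
    elif data > node.data:
        node.right = delete(node.right, data)
    else:
        # found it: if a child is missing, promote the other child
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        # otherwise replace with the max of the left subtree
        replacement = find_max(node.left)
        node.data = replacement.data
        node.left = delete(node.left, replacement.data)
    return node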
    Another challenging topic this week is the conversion from a recursive algorithm to an iterative one. Actually, we can find a corresponding iterative algorithm for every recursive algorithm once we have pondered the essence of the recursion. Here I take the Fibonacci sequence as an example: the basic idea of the recursive algorithm is shown in the picture below.

[picture: the tree of recursive calls for computing a Fibonacci number]
    Here we can code iteratively while still using the idea of recursion. (We can see that recursion works like popping an element from a stack, doing some computation, and pushing the recursive calls back onto the stack, over and over until the stack is empty.)


def function(x):
    """Compute fib(x) iteratively by simulating the recursion with a stack."""
    c = 0
    stack = [x]
    while stack:
        p = stack.pop()
        if p > 1:
            # the 'recursive calls': push the two subproblems onto the stack
            stack.append(p - 1)
            stack.append(p - 2)
        elif p == 1:
            # bottom case fib(1) = 1 contributes 1 to the total
            c = c + 1
    return c
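With the base cases fib(1) = 1 and fib(0) = 0, function(x) returns fib(x). For example:

>>> function(10)
55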

Sunday, October 27, 2013

csc148_slog_linked_structure

    In week 7, we learned linked structures in the lecture and practiced them in the lab and exercise.
    The first linked structure we learned is the linked list. It is the simplest case of a linked structure: each node holds a reference to only one other object.
    Why do we sometimes use a linked list rather than a normal list? I found the answer in the lab, in which we implemented a linked queue. In the time check of the linked queue, I found the time is constant no matter how many items are in the queue, i.e. the time complexity is O(1), while the dequeue of a queue based on a normal list is O(n), because popping the front forces every remaining element to shift. From this case, we can see that linked structures show their superiority in operations such as insertion and removal at the front.
    I would like to say something more about the difference between the implementation of a linked queue and a linked stack. In a linked queue, we have to keep track of both the top and the bottom, but in a linked stack we only keep track of the top object. The cause of this difference is that we dequeue at the top (by top, I mean the start node) but enqueue at the bottom (by bottom, I mean the end node). I don't think we can dequeue at the bottom and enqueue at the top, because it's always easier to find the children of a node than the parents of that node.
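    A sketch of a linked queue that tracks both ends (my own names, not the lab's; I call the ends front and back here):

class LLNode:
    """A node with a value and a link to the next node."""
    def __init__(self: 'LLNode', value: object, link: 'LLNode'=None) -> None:
        self.value, self.link = value, link

class LinkedQueue:
    """A queue with O(1) enqueue and dequeue."""
    def __init__(self: 'LinkedQueue') -> None:
        self.front = None   # the start node, where we dequeue
        self.back = None    # the end node, where we enqueue

    def enqueue(self: 'LinkedQueue', value: object) -> None:
        """Add value at the back of the queue."""
        node = LLNode(value)
        if self.back is None:        # queue was empty
            self.front = self.back = node
        else:
            self.back.link = node    # append after the end node
            self.back = node

    def dequeue(self: 'LinkedQueue') -> object:
        """Remove and return the value at the front of the queue."""
        value = self.front.value
        self.front = self.front.link
        if self.front is None:       # queue became empty
            self.back = None
        return value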



    Then we did something with the binary tree with linked nodes. The concept is the same as the binary tree based on a Python list, except that we use references to nodes rather than the subscripts of a Python list.

Saturday, October 19, 2013

csc148_slog_recursion

    In week 4 and week 5 of CSC148, we learned about recursion and recursive data types. It was a challenging topic for me at the beginning; I needed a lot of time to figure out how recursive functions work.
    Recursion is really useful to computer scientists because sometimes we can solve a problem easily by using the function itself.
    The Tower of Hanoi is a typical example. How do we solve the Tower of Hanoi with 5 cheeses? Just move the first 4 cheeses to the intermediate stool, then the fifth one to the destination stool, then the 4 cheeses from the intermediate stool to the destination stool. We don't care how to move 4 cheeses, because as long as we indicate how to move 1 cheese to the destination stool, the program will run recursively down to the bottom case: move 1 cheese.
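    A sketch of that recursive move (using the assignment's cheeses and stools; the function name is my own):

def move_cheeses(n: int, source: str, intermediate: str, destination: str) -> None:
    """Print the moves to shift n cheeses from source to destination."""
    if n == 1:
        # bottom case: move a single cheese
        print('Move top cheese from {} to {}'.format(source, destination))
    else:
        # move the top n - 1 cheeses out of the way, ...
        move_cheeses(n - 1, source, destination, intermediate)
        # ... move the biggest cheese to its destination, ...
        move_cheeses(1, source, intermediate, destination)
        # ... then stack the n - 1 cheeses back on top
        move_cheeses(n - 1, intermediate, source, destination)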
    I think the keys to writing a recursive function are:
        1. Find what we should do in the bottom case.
        2. Find the relationship between the 'n case' and the 'n-1 case' (return what? pass what argument to the 'n-1 case'?)
    The difficulty of writing a recursive function is working out the recursive algorithm, but once we work it out, the implementation will be much easier.

csc148_slog_oop

    In the first three weeks of CSC148, we learned the basic ideas of object-oriented programming. Object-oriented programming is a programming paradigm that focuses on classes and objects rather than constructing programs as a bunch of functions and data. A class has attributes (data) and methods (functions) inside it, and an object is an instance of a class.
    Object-oriented programming is the mainstream of programming currently. I think one of the reasons is that the prototype of OOP comes from the real world. In the real world, all things fall into categories, which is just the idea of classes. In the real world, things have both attributes and functionality (e.g. dogs have 4 feet, dogs can bark), which is the idea of attributes and methods.
    Another important concept of OOP is inheritance. It comes from the real world too. Typically, in the taxonomy of animals, we humans not only have the attributes of primates, but also the attributes of mammals. By saying primate is a subclass of mammal, we know for certain that humans have the attributes of mammals without spelling them out. In OOP, inheritance makes code more concise and reduces duplication.
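    A tiny sketch of that taxonomy (the classes and methods are made up for illustration):

class Mammal:
    """A mammal with a name."""
    def __init__(self: 'Mammal', name: str) -> None:
        self.name = name

    def nurse(self: 'Mammal') -> str:
        return '{} feeds its young milk'.format(self.name)

class Primate(Mammal):
    """A primate is a mammal, so it inherits nurse() for free."""
    def grasp(self: 'Primate') -> str:
        return '{} grasps with its hands'.format(self.name)

>>> human = Primate('human')
>>> human.nurse()   # inherited from Mammal without restating it
'human feeds its young milk'
>>> human.grasp()
'human grasps with its hands'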
    Above all, OOP can enhance the readability of programs and makes it easier to construct them. That's why it is useful for computer scientists.