

 Assignment 7 (Due: September 14, 2009, 13:00hrs)



Posts : 182
Points : 447
Join date : 2009-06-18

PostSubject: Assignment 7 (Due: September 14, 2009, 13:00hrs) - Sun Aug 09, 2009 9:46 pm

Read three published scientific papers (of varying quality) and write a short report for each of them.
hannah rhea hernandez


Posts : 27
Points : 35
Join date : 2009-06-19
Age : 30
Location : Davao City

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs) - Mon Sep 07, 2009 8:27 pm

AMOLED (active matrix OLED) functionality and usable lifetime at temperature
Author(s): David A. Fellowes; Michael V. Wood; Olivier Prache; Susan Jones

Paper Abstract
Active Matrix Organic Light Emitting Diode (AMOLED) displays are known to exhibit high levels of performance, and these levels of performance have continually been improved over time with new materials and electronics design. eMagin Corporation developed a manually adjustable temperature compensation circuit with brightness control to allow for excellent performance over a wide temperature range. Night Vision and Electronic Sensors Directorate (US Army) tested the performance and survivability of a number of AMOLED displays in a temperature chamber over a range from -55°C to +85°C. Although device performance of AMOLEDs has always been its strong suit, the issue of usable display lifetimes for military applications continues to be an area of discussion and research. eMagin has made improvements in OLED materials and worked towards the development of a better understanding of usable lifetime for operation in a military system. NVESD ran luminance degradation tests of AMOLED panels at 50°C and at ambient to characterize the lifetime of AMOLED devices. The result is a better understanding of the applicability of AMOLEDs in military systems: where good fits are made, and where further development is needed.

Link Analysis in Web Information Retrieval
Author: Monika Henzinger
Mountain View, California

Historically birth outcomes have been relatively unpredictable to physicians and midwives. The development of systems dynamics computer simulation methods provided a tool through which prediction might be realized.

Preventing unexpected events is always better than waiting until they are already at hand. Pregnancy has been one of the most critical issues in health and family planning. By predicting consequences, one can better position the resources needed. The goal of prediction is to better allocate treatment resources and to prevent problems from occurring, both by prevention before the fact and by detecting problems at an earlier stage than previously possible.

IT has held its ground by producing many technologies that are not limited to the web; it has opened ways that apply even to medical applications. The purpose of this research was to determine the potential usefulness of the Dynamics Computer Simulation Model (DSM) for psychosocial and biomedical research and for predicting which women are at risk for premature labor.

The paper presented two ways of predicting and testing outcomes for women with at-risk pregnancies. One is the traditional method, discriminant function analysis (DFA), which takes inputs for evaluation such as race, drinking, and other drug intake in the years before pregnancy. DFA is compared with DSM, which uses simulation modeling to show the possible cases of pregnant women. By creating these models, one can easily assess their behaviors and change them to avoid unnecessary risks. Demonstrating this kind of simulation model can predict outcomes and can help women realize that giving up some practices leads to a better pregnancy.
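As a rough sketch of how a discriminant function analysis like the DFA described above separates low-risk from high-risk cases, here is a minimal two-class Fisher discriminant. The features and data values are made up for illustration; they are not the paper's actual dataset or method details.

```python
# Minimal two-class linear discriminant (Fisher's DFA) sketch,
# using hypothetical risk-factor data, NOT the paper's dataset.

def mean(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fisher_direction(class0, class1):
    m0, m1 = mean(class0), mean(class1)
    # Pooled within-class scatter matrix (2x2 case).
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class0, m0), (class1, m1)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    # Invert the 2x2 scatter matrix directly.
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    # Discriminant direction w = S^-1 (m1 - m0).
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    return w, m0, m1

def classify(x, w, m0, m1):
    # Project onto w and pick the nearer projected class mean.
    p = x[0] * w[0] + x[1] * w[1]
    p0 = m0[0] * w[0] + m0[1] * w[1]
    p1 = m1[0] * w[0] + m1[1] * w[1]
    return 1 if abs(p - p1) < abs(p - p0) else 0

# Hypothetical features: (drinks per week, prior drug-use score).
low_risk = [[0, 0], [1, 0], [0, 1], [2, 1]]
high_risk = [[6, 3], [8, 2], [7, 4], [9, 3]]
w, m0, m1 = fisher_direction(low_risk, high_risk)
print(classify([8, 3], w, m0, m1))  # -> 1 (classified high risk)
```

The simulation-modeling (DSM) side of the paper is not shown here; the point is only that DFA reduces a set of risk factors to a single discriminant score that is then thresholded.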

Studying and Treating Schizophrenia Using Virtual Reality: A New Paradigm
Daniel Freeman
Department of Psychology, Institute of Psychiatry, King's College London, Denmark Hill, London, SE5 8AF, UK


Understanding schizophrenia requires consideration of patients’ interactions in the social world. Misinterpretation of other peoples’ behavior is a key feature of persecutory ideation. The occurrence and intensity of hallucinations is affected by the social context. Negative symptoms such as anhedonia, asociality, and blunted affect reflect difficulties in social interactions. Withdrawal and avoidance of other people is frequent in schizophrenia, leading to isolation and rumination. The use of virtual reality (VR)—interactive immersive computer environments—allows one of the key variables in understanding psychosis, social environments, to be controlled, providing exciting applications to research and treatment. Seven applications of virtual social environments to schizophrenia are set out: symptom assessment, identification of symptom markers, establishment of predictive factors, tests of putative causal factors, investigation of the differential prediction of symptoms, determination of toxic elements in the environment, and development of treatment. The initial VR studies of persecutory ideation, which illustrate the ascription of personalities and mental states to virtual people, are highlighted. VR, suitably applied, holds great promise in furthering the understanding and treatment of psychosis.

Actually, I saw this research on a cable TV channel. Scientists are studying the use of virtual reality to treat people suffering from schizophrenia. We are used to taking medicines or having therapy to heal various illnesses, but using virtual reality to treat a mental illness is a new approach. It makes you think: how can something unreal treat a form of psychosis? Isn't that thought a bit of a contradiction? Virtual reality refers to a multidimensional environment in a virtual space, while schizophrenia is a psychotic disorder characterized by distortions of reality, disturbances of thought and language, and withdrawal from social contact.
In the study above, the aim is to use virtual reality to create a new environment for patients in which they can communicate and act in a simulated setting. Of course, this simulated environment tends to match reality: locations and conversations are copied from a particular place. It's like the online games we play, in which the environment is programmed to look like real castles, with characters and costumes made for that place, or a military camp with soldiers carrying guns and hunting down terrorists. Patients are placed into the virtual reality and are expected to communicate with virtual characters. When they are deemed "cured," these patients are then tested in the real environment itself.

Though treating these patients with virtual reality may not be sufficient to cure them completely, it is still a big step toward a cure. Virtual reality can present social environments that trigger responses in patients equivalent to those in the real world; its main point is interaction with the social world.
With this technology, social anxiety and reactions can be assessed; short-term effects of symptoms can be examined; and exposure to their persecutory fears can help accomplish both study and treatment. This research really resulted in a great and helpful approach to treating an illness with the use of technology.

sources: http://schizophreniabulletin.oxfordjournals.org/cgi/content/abstract/sbn020
George Dan Gil

Posts : 30
Points : 34
Join date : 2009-06-23

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs) - Thu Sep 17, 2009 12:40 pm

Green computing: IBM introduces new energy management software
By Manufacturing Business Technology Staff –
Manufacturing Business Technology,
5/26/2008 9:04:00 PM

As part of IBM's Project Big Green, the company has announced new software developed to help customers maximize energy efficiency and reduce the costs associated with power and cooling. This latest version of IBM Tivoli Monitoring (ITM) software combines views of energy management information that enable optimization across data centers and facilities infrastructures. Monitoring capabilities give customers the ability to understand energy usage, alert data center managers to potential energy-related problems, and take preventive action. Historical trending and forecasting capabilities enable greater precision in existing environments and in energy planning. Autonomic capabilities allow customers to set power and utilization thresholds to help control energy usage. The new software can also help customers handle physical constraints in the data center relating to space, power, and cooling.

This new IBM software covers not just data centers but also non-IT assets such as air conditioning equipment, power distribution units, lighting, and security systems.

IBM will join forces with nine partners to offer IBM's IT management expertise with solutions that will allow customers to monitor and control energy consumption across their enterprise to help reduce power consumption and energy costs and better maintain service levels. The partners include:

• APC and TAC by Schneider Electric
• Eaton Corporation
• Emerson Network Power
• Johnson Controls, Inc.
• Matrikon
• OSIsoft
• Siemens Building Technologies
• SynapSense Corporation
• VMware

Making Money with Articles: Niche Websites
By: Jo Han Mok

Choosing a good niche subject on which to base your website is one of the most important aspects of making money off your articles. You should take each of these keywords and use it as the basis of one article on each page. This way, even though you are targeting one specific subject, you will be sure to interest a wide variety of people within that one niche.

The best way to find keywords for your subject is to use a keyword software program. This will generate a list of keywords or phrases that contain your niche and will also show you approximately how many people search for each word or phrase. From this you can recognize which topics most people prefer. If there are a number of topics that you like, pick the one that you feel would be easiest to start with; then, once that site is built and generating some revenue, you can start another site.

You are never limited in what you can do with niche website marketing, unless you find out that you do not have the marketing skills or the needed funding to make it happen. Otherwise, the sky is the limit!
About the author:
Jo Han Mok is the author of the #1 international business bestseller, The E-Code. He shares his amazing blueprint for creating million dollar internet businesses at: www.InternetMillionaireBlueprints.com

Before You Call a Web Developer, Ask Yourself One Question
By: Susan Daffron

Because we develop Web sites, not surprisingly, the first words we often hear from people are: "I need a Web site." My response is often "why?" The answer to that question can be quite telling. I can almost guarantee that you won't end up with a good Web site if you don't even know why you need one in the first place.

Lots of people waste their time and money on useless websites. The point is that the website you develop should be treated like any other business or marketing expenditure. For example, suppose you sell dog treats. You spend a bunch of money printing a brochure that explains why your dog treats are healthier or tastier than the ones at the grocery store. The goal of that brochure is to give people information on all the fabulous benefits of your special dog treats.

In much the same way, your Web site might explain why your dog treats are great. In fact, it might be nothing more than an "online brochure" with a lot of the same information as the paper one. That's a reasonable goal for a new site.

People go online for a few reasons: to find information, to be entertained, or to buy stuff. If your site lets people do one or more of these things, it has a reason to exist. However, unlike your paper brochure, a Web site has only about four seconds to get your message across (according to a recent report from Akamai and Jupiter Research). If you have no clue what information people are supposed to glean from your Web site, neither will your site visitors. Four seconds later, they're gone and they probably won't return.

Your Web site's goal should be connected to your business, because that is the purpose of your site.
When setting Web site goals, it makes sense to think about the visitors you are hoping to attract to the site. Who will be reading it? What do they need to know? Why would they visit your site in the first place? What terms would they type into a search engine to find your site? If you don't have good answers for these questions, you should reconsider the question I asked at the beginning of this article: "Why do you need a Web site?"

Not every business needs a Web site. You know your business better than anyone, so before you pick up the phone to call a Web designer, think about what you want your Web site to do for you and why.
About the author:
Susan Daffron is the President of Logical Expressions, Inc. (www.logicalexpressions.com) and the author of books on pets, web business, computing, and vegetarian cooking. Visit www.publishize.com to receive a complimentary Publishize podcast or newsletter and bonus report.
Karren D. Adarna


Posts : 31
Points : 33
Join date : 2009-06-20

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs) - Wed Sep 23, 2009 12:14 pm

Faith in treatment influences efficacy among AIDS patients
Lewis E. Mehl-Madrona, M.D.,Ph.D. and Beth Chan, Ph.D.
California Institute of Integral Studies
San Francisco, California


This paper was created to study how determined patients suffering from AIDS are to get well, and how strong their faith is that they will be cured. The researchers enrolled one hundred forty men in a treatment program offering an alternative therapy for AIDS, consisting of repeated injections of typhoid vaccine. The study was conducted to determine the level of the patients' faith in a treatment or therapy for AIDS. Patients were interviewed before entry into the protocol and at two-month intervals for two years while in the protocol. The researchers assessed the patients' faith in treatment at each contact. Clinic physicians then made weekly ratings of the patients' sense of subjective improvement. An effect of faith in treatment upon the course of AIDS was demonstrated. Faith may be important regardless of the efficacy of a treatment, and may be the mediating variable that renders statistically ineffective treatments highly effective for those who believe in them.


I think the research was done very well, and its objectives and significance are clear. The researchers fully demonstrated everything and were able to present the necessary data, documents, and even graphical representations. I can also say that the topic is timely, since AIDS is one of the most dangerous and alarming diseases today. The paper explains everything.


eWatch: A Wearable Sensor and Notification Platform
Uwe Maurer, Anthony Rowe, Asim Smailagic, Daniel P. Siewiorek
School of Computer Science, Carnegie Mellon University,
Pittsburgh Electrical and Computer Engineering Department,
Carnegie Mellon University, Pittsburgh Computer Science Department,
Technische Universität München, Germany


This paper provides the motivation for developing a wearable computing platform, a description of the power aware hardware and software architectures, and results showing how online nearest neighbor classification can identify and recognize a set of frequently visited locations. The eWatch is a wearable sensing, notification, and computing platform built into a wrist watch form factor making it highly available, instantly viewable, ideally located for sensors, and unobtrusive to users. Bluetooth communication provides a wireless link to a cellular phone or stationary computer. eWatch senses light, motion, audio, and temperature and provides visual, audio, and tactile notification. The system provides ample processing capabilities with multiple day battery life enabling realistic user studies.


This paper describes eWatch, a wearable sensor and notification computing platform for context-aware research. The hardware design focused on providing enough computational resources to perform machine learning algorithms locally, while still allowing a comfortable form factor and a battery capacity sufficient for extended user studies. This is the best research paper that I have read: best in the sense that I understood it perfectly, and the data provided make the reader fully understand the study. The researchers included related work that I think is just right for the review of the study. Everything is well done; the hardware architecture, the interface, and all.
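The online nearest-neighbor classification mentioned in the abstract can be sketched roughly as follows. The sensor features (light, sound, temperature) and location labels here are invented for illustration; they are not eWatch's actual data or feature set.

```python
import math

# Minimal 1-nearest-neighbor sketch of the kind of location
# recognition the eWatch paper describes. Training samples are
# (feature vector, location label) pairs with made-up values.

training = [
    ((800.0, 40.0, 21.0), "office"),
    ((820.0, 45.0, 21.5), "office"),
    ((120.0, 70.0, 19.0), "cafeteria"),
    ((100.0, 75.0, 19.5), "cafeteria"),
    ((300.0, 30.0, 23.0), "bedroom"),
]

def classify(sample):
    # Label the sample with the class of its closest training point.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda t: dist(t[0], sample))[1]

print(classify((810.0, 42.0, 21.2)))  # -> office
```

On a wrist-worn device the appeal of nearest neighbor is that it needs no training phase on the device itself, only stored examples and a distance computation, which fits the paper's point about running classification locally.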

Is Java still secure?
David Chess, John Morar
Virus Bulletin Conference; October, 1999
October 1999


The most recent release of a Java Software Development Kit from Sun comes with a new name and a new version number. The underlying technology and environment are now referred to as the Java 2 Platform (Standard Edition, Enterprise Edition, and so on). The version number of the most recent release is 1.2, which follows along nicely after versions 1.0 and 1.1 of the Java Development Kit. Java 2 is the next logical step in the evolution of the Java technology; don't let the name confuse you. While the Java 2 plugin is the primary way that users can get and use a Java 2 engine today, future releases of Web browsers and other active content systems will presumably come with Java 2 already built in. Netscape, for instance, has announced that version 5 of Netscape Communicator will include a "plugable" Java interface, which will among other things allow replacing the standard Netscape Java engine with the Java 2 engine. Java 2 (version 1.2) has now been released, and with it a new Java security architecture. We discuss the differences between the initial Java security architecture, the interim architectures in popular browsers, and the Java 2 model, including the implications of the new model for Java viruses and Trojan horses. Which previous problems have been solved, what are the security aspects of the new Java features, and what possible holes still remain? We address these and other timely questions. So is Java still secure? The short answer is yes. For unsigned or untrusted applets, the Java security manager still enforces the same sandbox controls that it always has. The occasional security hole in a Java implementation is still discovered, but no exploits are known. There are a number of demonstration "hostile applets" that make annoying sounds, or may cause your browser to crash, but there are no viable Java viruses or actively malicious Java-based attacks known to the security community.


This is about studying the security state of Java as new versions are published. The Java 2 security system provides more granular control than previous versions of Java did, but the range of function available in the sandbox is essentially the same. Consequently, there is no fundamentally new danger of viral spread in the new environment. After the discussion and review, an answer came out, and it is YES: Java is still secure. The Java security manager still enforces the same sandbox controls that it always has. The occasional security hole in a Java implementation is still discovered, but no exploits are known.




Posts : 44
Points : 50
Join date : 2009-06-23
Age : 29

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs) - Fri Sep 25, 2009 11:03 am

Read three published scientific papers (of varying quality) and write a short report for each of them.

Scientific Research 1

Does Internet Always Intensify Price Competition?
Rajiv Lal, Miklos Sarvary (April 1998)
Graduate School of Business, Stanford University

The research paper discusses the primary role of the Internet in different kinds of business practices. It argues that the internet is a significant factor in price competition in the global market. It cites two important attributes that must be recognized in order to distinguish buying in a store from buying on the internet, namely the digital and non-digital attributes of products or services. It also provides formulas used to model the probable effects of the internet on various aspects of business practice, and it uses direct observation and statistics to derive its conclusions.

Upon reading, I realized that the internet doesn't always intensify price competition. Yes, it plays an important role in the business market, but that does not necessarily mean there is any assurance that lots of people are shopping on the internet. Even with the many advantages the internet offers, a product's quality cannot be evaluated online, since the internet provides only a description of the product, which affects the evaluation of fair pricing.


Scientific Research 2

The Promise of Information Technology in the Travel Industry
Brenda L. Dietrich, Jane L. Snowdon and Joann B. Washam

The study showed how information technology would make a big difference in the travel industry. It cited three applications, the network computing, combination of electronic ticketing and smart cards and corporate travel management, which will transform the industry. It aims to make known that information technology can help in the survival of the travel industry in the midst of increasing global forces and how it would change the lifestyle of those people who will adapt the new world driven by information technology.

After reading the title alone, my interest was aroused, because information technology really has made such a big step for companies, particularly travel industries, to remain competitive. It has made travel-related transactions more convenient, easier to access, and less time-consuming. It also helps companies reduce their production costs while at the same time increasing their production.


Scientific Research 3

Usability and Open Source Software
David M. Nichols and Michael B. Twidale*
Department of Computer Science
University of Waikato, Hamilton New Zealand

The study reviews the existing evidence of the usability of open source software and discusses how the characteristics of open-source development influence usability. It describes how existing human computer interaction techniques can be used to leverage distributed networked communities of developers and users to address issues of usability.

When I was halfway through reading this paper, I was thinking that open-source software development overlooked the importance of good usability. But when I was done reading, I concluded that the open source community is simply still increasing its awareness of usability issues. Improvements in the usability of open source software do not necessarily mean that such software will displace proprietary software from the desktop. Improved usability is a necessary condition for such a spread, but other factors are also involved.

angel mae b. brua


Posts : 38
Points : 46
Join date : 2009-06-23
Age : 30
Location : Davao City

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs) - Mon Sep 28, 2009 11:27 am

Growth and Mineralization Characteristics of
Toluene and Diesel-Degrading Bacteria
From Williams Refinery Groundwater

Laura Daum
West Valley High School
Fairbanks, Alaska
March 4, 2004

Through bioremediation, naturally occurring bacteria can help restore the water quality of aquifers contaminated by leaking petroleum tanks. This study determines the optimal growth temperature and mineralization rate (the rate at which the bacteria turn hydrocarbon into water and CO2) for hydrocarbon-degrading bacteria from 4.5 ºC contaminated groundwater under Williams Refinery in North Pole, Alaska. The study relies mainly on statistical and graphical analysis. In order to determine whether the diesel-degrading bacteria could grow anaerobically, a nitrate-reducing culture was prepared using an aerobic diesel-degrading consortium from Williams Refinery. One mL of the aerobic consortium was added to a sterile container with 300 mL of filtered Bushnell-Haas (BH) broth. After ten drops of diesel were added to the culture, the container was sparged with N2 for 15 minutes and sealed using paraffin film. This culture was incubated at 11 ºC, the optimal temperature for the diesel-degrading bacteria determined the previous year, for about one month to ensure that all the oxygen in the container had been used up and that the bacteria were now growing anaerobically.

The study caters to the needs of the people in the affected area, and it is a genuinely scientific piece of research.

Complete Primate Skeleton from the Middle Eocene of Messel in Germany: Morphology and Paleobiology
John Hawks, University of Wisconsin, United States of America
Darwinius masillae represents the most complete fossil primate ever found, including both skeleton, soft body outline and contents of the digestive tract. Study of all these features allows a fairly complete reconstruction of life history, locomotion, and diet. Any future study of Eocene-Oligocene primates should benefit from information preserved in the Darwinius holotype. Of particular importance to phylogenetic studies, the absence of a toilet claw and a toothcomb demonstrates that Darwinius masillae is not simply a fossil lemur, but part of a larger group of primates, Adapoidea, representative of the early haplorhine diversification.
This is a research topic about primate fossils and the study of the past. The presentation is not that presentable and, from what I have seen of the article, it somewhat lacks facts for the readers. Maybe that is because it is not really published for free.

An Organic Metal/Silver Nanoparticle Finish on Copper for Efficient Passivation and Solderability Preservation
Ormecon GmbH, Ferdinand-Harten-Str. 7, Ammersbek, 22949, Germany

A complex formed by polyaniline (in its organic metal form) and silver has been deposited on copper in nanoparticulate form. When depositing on Cu pads of printed circuit boards it efficiently protects against oxidation and preserves its solderability. The deposited layer has a thickness of only nominally 50 nm, containing the Organic Metal (conductive polymer), polyaniline, and silver. With >90% (by volume), polyaniline (PAni) is the major component of the deposited layer, Ag is present equivalent to a 4 nm thickness. The Pani–Ag complex is deposited on Cu in form of about 100 nm small particles. Morphology, electrochemical characteristics, anti-oxidation and solderability results are reported.


The article is not that complete, since the only thing that can be read is the abstract of the study.
Kate Mariel Dizon


Posts : 58
Points : 71
Join date : 2009-06-19
Age : 27
Location : Davao City, Philippines

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs) - Tue Sep 29, 2009 6:20 pm

Title: Making Grass and Fur Move
Author(s): Sven Banisch

This thesis introduces physical laws into the real time animation of fur and grass. The main idea to achieve this, is to combine shell–based rendering and mass–spring systems.

In a preprocessing step, a volume array is filled with the structure of fur and grass by a method based on exponential functions. The volumetric data is used to generate a series of two dimensional, semi–transparent textures that encode the presence of hair. In order to render the fur volume in real time, these shell textures are applied to a series of layers extruded above the initial surface.

Moving fur can be achieved by laterally displacing these shell layers at runtime. The use of a mass–spring system to determine this displacement has not previously been tried; mass–spring systems have mainly been used for the interactive simulation of the dynamical behavior of cloth or long hair. This thesis shows that mass–spring systems also form an applicable physical model for simulating the dynamics of fur and grass.

For the simulation of a mass–spring system, two numerical solvers are implemented. The first one is based on explicit Euler integration, and the second one is derived from an implicit Euler scheme. This thesis outlines the effects of different numerical solvers on performance and stability.

In order to simulate fur and grass dynamics, different ways of generating masses and springs over the surface are discussed. Six so-called mass–spring topologies are introduced and used in animation. Three of them allow the shell layers to separate laterally, so the parting of grass can be simulated. Performance observations prove mass–spring systems to be well-suited for the real-time simulation of fur and grass dynamics.
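As a rough illustration of the explicit Euler solver the thesis compares, here is a minimal sketch for a single damped mass on a spring. The constants are illustrative only; the thesis's shell-layer topologies and full mass–spring networks are not modeled.

```python
# One explicit Euler step for a single damped mass on a spring,
# the simpler of the two numerical solvers the thesis discusses.
# All constants are made up for illustration.

def euler_step(pos, vel, rest, k, mass, damping, dt):
    # Hooke's law plus viscous damping pulls the mass toward rest.
    force = -k * (pos - rest) - damping * vel
    acc = force / mass
    new_pos = pos + vel * dt  # explicit Euler: uses the OLD velocity
    new_vel = vel + acc * dt
    return new_pos, new_vel

pos, vel = 1.0, 0.0  # spring displaced by 1 unit, starting at rest
for _ in range(1000):
    pos, vel = euler_step(pos, vel, rest=0.0, k=10.0,
                          mass=1.0, damping=0.5, dt=0.01)
print(pos, vel)  # oscillation has decayed toward the rest position
```

Explicit Euler is cheap per step but only conditionally stable (large `k * dt / mass` blows up), which is exactly why the thesis also implements an implicit Euler scheme and compares their performance and stability.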

First of all, I would like to say that the paper's title made me interested in it the first time I read it, but it was also misleading. As I read Banisch's thesis, he thoroughly explained his aim of producing real-time animation of fur and grass. To achieve this, he combines two ideas: shell-based rendering and mass-spring systems.

The study may look difficult because it deals with animation, which literally means "to give life to things." In some pages of his work, he elaborated on how he carried out his thesis. He gave some outputs from working out the grass and fur. He also reviewed other studies related to his work to get ideas about the mass-spring system and shell-based rendering.

It was nicely done; there was no confusion in his thesis. Though at first it may be difficult because of the animation content, as you get through the pages you can easily understand what his concept is all about.



Title: Interactive Visualization and Design of Deterministic Fractals
Author(s): Sven Banisch

This thesis describes an interactive tool for fractal shape exploration and modification. The software implements a fast algorithm for the visualization of fractal structures and combines various techniques useful in the analysis of the underlying dynamical systems.

A deterministic fractal is defined by the attracted region of a discrete dynamical system F : R2 → R2. Two real-valued polynomials, g and h, each of which depends on 13 parameters, determine the dynamic behaviour of this system. F is iterated over and over for initial points in a user-specified range of the real plane. This yields the orbits (iterated point sequences) of these initial points. If these orbits tend to infinity, they do not belong to the attracted region; if an orbit converges to some state, it belongs to the fractal. This so-called escape time algorithm is performed on the GPU in order to achieve fractal visualization in real time.
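A minimal CPU sketch of the escape-time iteration described above, using the classic Mandelbrot map z → z² + c as a simple stand-in for the thesis's 13-parameter polynomial systems (g, h); the GPU implementation itself is not reproduced here.

```python
# Escape-time algorithm sketch: iterate the map and count how long
# an orbit stays bounded. The Mandelbrot map z -> z^2 + c is used
# here as a simple illustrative instance, not the thesis's system.

def escape_time(cx, cy, max_iter=50, bound=2.0):
    zx = zy = 0.0
    for i in range(max_iter):
        # One iteration of z -> z^2 + c in real coordinates.
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > bound * bound:
            return i       # orbit escaped: point is outside the fractal
    return max_iter        # orbit stayed bounded: point is inside

print(escape_time(0.0, 0.0))   # -> 50 (the origin never escapes)
print(escape_time(2.0, 2.0))   # -> 0  (far points escape immediately)
```

Running this test per pixel over a grid of c values produces the familiar fractal image; the thesis's contribution is doing exactly this per-pixel loop on the GPU so the parameters can be changed interactively.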

Interactive visualization of fractals allows parameter changes to be applied at run time. This enables real-time fractal animation. Moreover, an extended analysis of the discrete dynamical systems used to generate the fractal is possible. For fast exploration of different fractal shapes, a procedure for the automatic generation of bifurcation sets, the generalizations of the Mandelbrot set, is implemented. The software also implements methods for the graphical analysis of 2-D dynamical systems.

This thesis describes how to implement these techniques, but also how to use them in the design of fractal objects. A number of application examples prove the usefulness of the approach. A performance analysis shows that the interactive design of deterministic fractals is feasible on a mid-range computer system. Visual results, presented throughout this work, show that the developed tool can be very useful for artistic work.


The thesis describes an interactive tool for fractal exploration and modification. In his background and introduction, he revealed some of the references for his thesis, several of which were math-related studies. His work also tackled algorithms. The paper was really technical, and it was confusing because of some difficult-to-understand equations and terms.

I think this would be a good paper for CS and IT students like me to build on because it deals with animation and the application of algorithms. Unfortunately, even though I am genuinely interested in animation, it’s something that I know little about.



Title: Assessing the Yield of IT Projects in Developing Nations: Aggregated Models Are Not Sufficient
Author(s): Stephen Ruth, Bhaskar Choudhury


Determination of the outcome of an IT project in a developing nation is often based on sectoral models and highly aggregated data. This paper offers an example of a replicable methodology to go to the grass roots—the user level—to obtain valuable insights from the individual and group data that are masked by the aggregate statistics.

This study is about assessing the yield of IT projects in developing nations. It’s a significant topic, mainly because it focuses on IT and the demand in other countries. The study has three action steps for the future:
First, since it is clearly possible to accomplish a replicable study of this type almost anywhere in the world, there should be greater emphasis on gathering this type of data, even at the expense of the aggregate studies. Second, a comprehensive collection of lower level studies of this type needs to be assembled to determine public policy options that may already be justified. Stephen Denning of the World Bank has pioneered a process of sharing information organization-wide. He found that by sharing “stories” about successful implementation ideas learned in one location, he could apply them to other countries around the world. It is quite likely that these results in Romania could assist in Internet deployment plans in Eastern Europe or beyond.


The study’s topic is quite interesting because it’s mainly about improving the Internet in developing nations. The Philippines is a developing nation, and according to some of my readings, the Internet has quite a low penetration rate here compared to other nations. However, we are quickly catching up.

The results of this study may be helpful in determining key factors that would help improve the Internet in developing nations. The study’s scope is quite large and I think it was difficult to gather enough data and come up with accurate results. The paper is quite long and there are particular areas that I do not understand fully. However, I would really like to express my appreciation for the study’s goal because it can change how the Internet is implemented and used in nations worldwide, especially the developing ones.

mariechelle alcoriza

Posts : 36
Points : 50
Join date : 2009-06-20
Age : 30
Location : Davao City

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs), posted Thu Oct 01, 2009 8:36 pm

eWatch: A Wearable Sensor and Notification Platform
Uwe Maurer [1], Anthony Rowe [2], Asim Smailagic [3], Daniel P. Siewiorek [3]
[1] Computer Science Department, Technische Universität München, Germany
[2] Electrical and Computer Engineering Department, Carnegie Mellon University, Pittsburgh
[3] School of Computer Science, Carnegie Mellon University, Pittsburgh

Paper Abstract:
The eWatch is a wearable sensing, notification, and computing platform built into a wrist watch form factor making it highly available, instantly viewable, ideally located for sensors, and unobtrusive to users. Bluetooth communication provides a wireless link to a cellular phone or stationary computer. eWatch senses light, motion, audio, and temperature and provides visual, audio, and tactile notification. The system provides ample processing capabilities with multiple day battery life enabling realistic user studies. This paper provides the motivation for developing a wearable computing platform, a description of the power aware hardware and software architectures, and results showing how online nearest neighbor classification can identify and recognize a set of frequently visited locations.
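The "online nearest neighbor classification" the abstract mentions can be illustrated with a small sketch. Everything below is hypothetical (the two-feature setup, feature values, and location labels are invented for illustration); the real eWatch builds its features from the light, motion, audio, and temperature sensors.

```python
# 1-nearest-neighbor location classification, a toy version of the
# abstract's idea. All feature values and labels here are made up.

def nearest_neighbor(sample, labeled_data):
    """Return the label of the training sample closest to `sample`
    (squared Euclidean distance, 1-NN)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_label, _ = min(
        ((label, dist2(sample, feat)) for feat, label in labeled_data),
        key=lambda pair: pair[1],
    )
    return best_label

# (light level, audio level) -> location, all values invented.
training = [
    ((0.9, 0.2), "office"),
    ((0.2, 0.1), "bedroom"),
    ((0.6, 0.8), "cafeteria"),
]
print(nearest_neighbor((0.85, 0.25), training))  # closest to "office"
```

This is the sense in which each place has a recognizable sensor "fingerprint": the classifier simply returns the stored location whose fingerprint is nearest to the current reading.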

The eWatch reminds me of a mobile phone. How I wish that mobile phones could be attached to a watch: all you have to do is put it on your wrist and wait for the phone calls and text messages to come. As I read the paper, one thing that caught my attention is that it can locate a person who is using an eWatch, similar to a GPS. The paper states that the eWatch would be very beneficial to patients, since it senses light, motion, audio, and temperature and provides visual, audio, and tactile notification. An eWatch can sense if the user is in distress and then query to confirm that it is an emergency. The use of online learning could profile the patient’s daily activity and notify the caretaker if a patient no longer performs their daily activities. The eWatch can also notify a patient when they should take their medication. (Taken from the actual paper.)

I myself am truly amazed by this technology. It would be very helpful to our elderly people; with its help, the chances of our elderly getting lost are lessened, since we can track them through the eWatch. One thing I learned as I read the paper is that every location has its own unique background noises, such as the noise of computers, televisions, and even traffic from cars. Also, the GUI has a calendar that can monitor schedules and add, edit, and delete them on specific dates. This would help remind very busy people of what they have to do.

Anyway, with regard to the paper, there are terms that need to be explained more, or the authors should include a glossary of terms so that readers can better understand what they are talking about. One thing I also liked is that they shared the hardware specifications as well as screenshots of the said technology. In addition, they stated that there will be future work. I just hope that there will be a technology like this here in the Philippines; it would truly benefit us.

The Effect of Bandwidth Software on the Business
Muhammad Azeem Ashraf
July 13, 2009


Computers and technology are used in all parts and fields of every person’s life. In this paper, bandwidth software is introduced into the market. The bandwidth software would provide solutions to the small and large businesses that use an internet service. Most businesses today do their advertising over the web, so all the clients need to do is sit comfortably in their homes and shop online.
The paper stated that the bandwidth software helps an individual check and limit the speed of the internet that is provided to its users. The bandwidth software manages the traffic and speed of the internet connection. Also, a business that uses bandwidth software would be helped in different ways, including the following:
o Reduces the cost of the internet by not providing a separate connection to each user
o Secures the sites that you don’t want to be accessed
o Each user can increase or decrease the bandwidth according to the rate of data transfer and the speed he needs to perform his tasks
o The bandwidth software also helps keep all the computers safe from viruses and threats
Thus the bandwidth software plays a significant role in the growth of the business. It can reduce costs, and the business can use those savings for other purposes that may increase its profit and in turn lead the business toward growth and goodwill.
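The per-user speed limiting described above is commonly implemented with a token-bucket rate limiter. The sketch below shows that generic technique under my own assumptions; it is not the (unnamed) product's actual algorithm. Each user gets a bucket that refills at the allowed rate, and a packet passes only if enough tokens remain.

```python
# Generic token-bucket rate limiter, the usual mechanism behind
# per-user bandwidth caps. Illustrative only.

class TokenBucket:
    """Allow `rate` units of traffic per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, size, now):
        # Refill tokens for the elapsed time, capped at the bucket's capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True   # packet fits within the user's allowance
        return False      # over the cap: drop or queue the packet

bucket = TokenBucket(rate=100.0, capacity=200.0)  # 100 units/s, burst of 200
```

Giving each user their own bucket is what lets an administrator raise or lower individual speeds, as the second bullet point describes.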


Yes, it’s true that bandwidth software can benefit a business, but how about schools and universities? I was just wondering whether this software is applicable in our school. One thing I noticed is that the paper is too general: it should suggest which types of businesses this software can be applied to. I would also suggest that it specify the possible price range of the software.
Computers May Slowly Be Turning Green
Jon Norwood, October 8, 2007

The paper discusses the new trend of eco-friendly computers being manufactured, as well as their possible impact on global warming. A PC draws between 200 and 700 watts when left on, more than a DVD player or a TV. 600 watts over time will add considerably to an electric bill, and it will send a great deal of carbon dioxide into the atmosphere over the course of a year.

Europe has begun to take the computer disposal problem seriously, and several regulations are currently being established to help promote the concept of the “Green PC”. In America, Dell pledged to plant a tree for every PC it sells, beginning in 2008.

The PC world is currently working on a carbon-neutral PC that focuses primarily on decreasing the power requirements of the computer, which means less carbon dioxide in the atmosphere. The manufacturing process PCs go through is the real environmental concern, as on average 10 times the weight of the PC in raw materials is burned through during its creation. This is a problem bigger than the US, as a great deal of the hardware used in American computers is imported. Companies such as Dell and Gateway will have to become more mindful of where the parts they use come from, and perhaps refuse to purchase from firms that are not environmentally friendly. This would more than likely increase the cost of computers immediately, but that may be a necessary change.


It is good to know that there are computer manufacturers taking the initiative toward a greener and healthier environment. For my part, papers like this also help people become aware of what is really happening right now. We have already seen the effects of global warming today: even a single day of pouring rain brings floods, affecting lots of people.
In my own opinion, to really solve global warming, all of us, even ordinary citizens, should do our part if we do not want these things to get worse. We should all take a step. Haha!


Posts : 30
Points : 39
Join date : 2009-06-19
Age : 30
Location : davao city

PostSubject: assignment 7, posted Fri Oct 02, 2009 2:22 am

Japanese researchers downplay super CPU effort
By Vivian Yeo, ZDNet Asia
Tuesday, September 29, 2009 06:40 PM

A group of Japanese researchers is collaborating on a software standard for multicore processors to be used in a range of technology products, including mobile phones and in-vehicle navigation systems. The effort could lead to the development of a super CPU, according to the researchers.

The alliance includes big names such as Fujitsu, Toshiba, Panasonic, Renesas Technology, NEC, Hitachi and Canon. All have agreed to pool their resources to create a new, standardized, power-saving central processing unit (CPU) that could be used across the entire industry in a wide range of consumer electronics by the end of fiscal 2012, the evening edition of the Nikkei Business News reported Thursday. According to the plan, each of the Japanese chip makers will produce its own CPU compatible with the energy-saving software invented by Kasahara. A group of engineers will then create a prototype that runs on solar cells and uses less than 70% of the power consumed by normal CPUs. The new CPU would still be able to function even during prolonged power shortages in a natural disaster, the Nikkei added.

The super Japanese CPU could be incorporated into different brands of televisions, digital cameras, and other electronic appliances.

If realized and adopted in a broad range of consumer electronics, the dominance of Intel today will be challenged. Having a universal standard and software format that could be used in various appliances may help companies save on research and development costs. The project appears to aim at creating processors that can draw all the power they need from solar panels and can be used to power a new range of mobile devices that never need a mains connection.

Source: http://www.zdnetasia.com/news/hardware/0,39042972,62058168,00.htm

New Information System for Blind and Visually Impaired
Computer Scientists at Freie Universität Berlin Start Field Trials
No. 241 issued on 09/16/2009

The artificial intelligence group at Freie Universität Berlin, under the direction of the computer science professor Raúl Rojas, has developed a new type of information system for blind and visually impaired individuals. Field trials are being carried out to optimize the device for future users. During the next six months it will be tested by 25 persons. The artificial intelligence group at Freie Universität is collaborating with a research group at the Telekom Laboratories headed by Dr. Pablo Vidales and the Berlin Association for the Education of the Blind and Visually Impaired e.V. The joint project is called InformA. After completion of the field trials, it will receive funding from the German Federal Ministry of Education and Research through its EXIST seed funding program for university-based business start-ups. In addition, IBM Germany is providing funding for further development of the device at Freie Universität.

If this project is pursued, the information provided by the InformA device could also interest people who have no previous experience with computers and may not have had a chance to access the information offered on the internet today.
Source: http://www.rehacare.de/cipp/md_rehacare/custom/pub/content,lang,2/oid,22988/ticket,g_u_e_s_t/~/New_Information_System_for_Blind_and_Visually_Impaired_People.html

Continuing Education and Knowledge Retention: A Comparison of Online and Face-to-Face Deliveries
by Connie Schardt and Julie Garrison
A systematic search of the research literature from 1996 through July 2008 identified more than a thousand empirical studies of online learning. Analysts screened these studies to find those that (a) contrasted an online to a face-to-face condition, (b) measured student learning outcomes, (c) used a rigorous research design, and (d) provided adequate information to calculate an effect size. As a result of this screening, 51 independent effects were identified that could be subjected to meta-analysis. The meta-analysis found that, on average, students in online learning conditions performed better than those receiving face-to-face instruction. The difference between student outcomes for online and face-to-face classes—measured as the difference between treatment and control means, divided by the pooled standard deviation—was larger in those studies contrasting conditions that blended elements of online and face-to-face instruction with conditions taught entirely face-to-face. Analysts noted that these blended conditions often included additional learning time and instructional elements not received by students in control conditions. This finding suggests that the positive effects associated with blended learning should not be attributed to the media, per se. An unexpected finding was the small number of rigorous published studies contrasting online and face-to-face learning conditions for K–12 students. In light of this small corpus, caution is required in generalizing to the K–12 population because the results are derived for the most part from studies in other settings (e.g., medical training, higher education).
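The effect size the abstract describes, the difference between treatment and control means divided by the pooled standard deviation, is the familiar Cohen's d. The sketch below computes it on invented exam scores (not data from any study in the review):

```python
# Cohen's-d style effect size: (mean_t - mean_c) / pooled standard deviation.
# The score lists are made up for illustration.

import math

def effect_size(treatment, control):
    """Standardized mean difference between two independent samples."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance (n - 1 denominator)
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    nt, nc = len(treatment), len(control)
    pooled_sd = math.sqrt(((nt - 1) * var(treatment) + (nc - 1) * var(control))
                          / (nt + nc - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# Invented scores for an online (treatment) vs face-to-face (control) class.
online = [82, 85, 88, 90, 79]
face_to_face = [78, 80, 83, 85, 74]
d = effect_size(online, face_to_face)
```

A positive d means the online group scored higher on average; the meta-analysis pools many such per-study values into one overall estimate.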

This doesn’t mean that we will be saying goodbye to classrooms, but research and reports suggest that online education could expand sharply over the next few years. The real promise of online education, as experts say, is providing learning experiences that are more tailored to individual students than is possible in classrooms. In this way students may find it more engaging, since they are enabled to learn by doing.
Source: http://www.eblip4.unc.edu/papers/Schardt.pdf
Melgar John Gascal
Posts : 13
Points : 16
Join date : 2009-06-19
Age : 30

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs), posted Fri Oct 02, 2009 10:39 am

Website Load Testing- Why and How to do it
By: Alexander Golishev

The article explains the necessity of website load testing. It also provides detailed information on the kinds of website performance tests, and on website load testing in particular. The second part of the article is a how-to section that can help you choose website load testing software to protect your website and your business.

Website load testing is part of the performance tests that every web application should undergo once in a while to make sure it delivers maximum-quality operation. The website performance tests include benchmark tests, which check the website’s performance under minimum loads; stress tests, which test the web application’s behavior in extreme load conditions; and load tests, whose goal is to check the website’s performance under loads significantly above average.

The process of website load testing is done by putting an unusually high load on the system in order to see which part of it fails. These tests can help find bottlenecks in the website’s performance and eliminate them before they compromise you and your business.

The latter part of the article explains how to do website load testing. With good website load testing software, performance tests are based on real human activity, not machine-generated tests. Test results must be presented in the most clear and comprehensive manner, with graphs and reports covering every aspect of the website’s performance and a thorough analysis of its bottlenecks, errors, and other relevant information.
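The core mechanic of a load test, firing many concurrent requests and recording latencies and errors, can be sketched in a few lines. The `fake_request` function below is a stand-in of my own (real tools issue actual HTTP requests to the live site and model human think time, as the article stresses):

```python
# Toy load-test harness: concurrent requests, error count, p95 latency.
# `fake_request` simulates a server call; swap in a real HTTP call in practice.

import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(i):
    """Stand-in for an HTTP request; returns (status, latency in seconds)."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server took about 10 ms
    return 200, time.perf_counter() - start

def load_test(n_requests, concurrency):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(fake_request, range(n_requests)))
    latencies = sorted(lat for _, lat in results)
    errors = sum(1 for status, _ in results if status != 200)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    return {"errors": errors, "p95_latency": p95}

report = load_test(n_requests=50, concurrency=10)
```

Ramping `concurrency` upward until errors appear or the p95 latency degrades is exactly how such a harness reveals the bottleneck the article talks about.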

Computer Radiation
By: Danielle Barone
Reference: http://www.bellaonline.com/articles/art49755.asp

This article tells about the risks associated with long-term exposure to radiation. These include increased risk of all forms of cancer, tumors, blood disorders, miscarriage, headaches, insomnia, anxiety, aging of the skin, skin burn, etc. Radiation exposure over time can cause skin burn, dry wrinkled skin, and photoaging. This skin damage is identical to sun damage and causes the same health problems. Many electronic products that we use on a daily basis expose us to harmful radiation. One of these electronic products that highly emits radiation is the computer. Most people do not realize the harm that radiation can cause to the human body, even at low levels. It is also not a widely advertised problem because it would negatively affect industry and the economy as a whole. For now, many manufacturers are improving products to emit less radiation, and great technological improvements have been made in the last five years alone.

How Computer Viruses Work
By Marshall Brain
Reference: http://computer.howstuffworks.com/virus1.htm

Computer viruses are called viruses because they share some of the traits of biological viruses. A computer virus passes from computer to computer like a biological virus passes from person to person.
People write computer viruses. A person has to write the code, test it to make sure it spreads properly and then release it. A person also designs the virus's attack phase, whether it's a silly message or the destruction of a hard disk. Why do they do it?
There are at least three reasons. The first is the same psychology that drives vandals and arsonists. Why would someone want to break a window on someone's car, paint signs on buildings or burn down a beautiful forest? For some people, that seems to be a thrill. If that sort of person knows computer programming, then he or she may funnel energy into the creation of destructive viruses.
The second reason has to do with the thrill of watching things blow up. Some people have a fascination with things like explosions and car wrecks. Creating a virus is like creating a bomb inside a computer, and the more computers that get infected the more "fun" the explosion.
The third reason involves bragging rights, or the thrill of doing it. If you are a certain type of programmer who sees a security hole that could be exploited, you might simply be compelled to exploit the hole yourself before someone else beats you to it.
Of course, most virus creators seem to miss the point that they cause real damage to real people with their creations. Destroying everything on a person's hard disk is real damage. Forcing a large company to waste thousands of hours cleaning up after a virus is real damage. Even a silly message is real damage because someone has to waste time getting rid of it. For this reason, the legal system is getting much harsher in punishing the people who create viruses.

Posts : 23
Points : 25
Join date : 2009-06-22
Age : 30

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs), posted Fri Oct 02, 2009 11:28 am

A Software Flaw Taxonomy: Aiming Tools At Security
Sam Weber, Paul A. Karger, Amit Paradkar

IBM Research Division
Thomas J. Watson Research Center
P. O. Box 704
Yorktown Heights, NY 10598, USA

In the paper, the researchers discuss software flaw taxonomy and the need for it. A software flaw taxonomy is an ordered system that indicates the natural relationships of security flaws. It organizes the problem space and is useful for tool-builders targeting their technology on a rational basis. To create the security flaw taxonomy, they combined previous efforts to categorize security problems and incident reports, correlated the taxonomy with the available information about current high-priority security threats, and observed the results.

As we have learned in our CS Research Methods subject, the abstract should be concise and clear for readers to understand, and I would say that the paper achieved that criterion. The research is also very interesting to me, since the researchers aimed to develop a software flaw taxonomy that categorizes security problems and assists software security evaluations, which is helpful when working on a software security tooling effort. It may seem simple to others, but as Jeff Skousen says, the most elegant research is usually simple and direct.


PlanSP: A Framework to Automatically
Analyze Software Development and
Maintenance Choices

Biplav Srivastava

IBM Research Division
IBM India Research Lab
Block I, I.I.T. Campus, Hauz Khas
New Delhi - 110016. India.

A piece of software is made up of components or modules, and these components are in turn made up of smaller sub-components. Managing a software project involves tracking the development and maintenance of the individual components, and managing the components and monitoring their evolution during the life cycle of the software is not easy. The researchers observed this problem and came up with an idea for resolving it by introducing PlanSP, an automated decision-support framework for software development and maintenance. PlanSP can analyze different choices and assists the user in making cost-effective decisions. To demonstrate that PlanSP is both useful and practical for software project management, they built a proof-of-concept prototype.

First, I would like to say that the research is very interesting. It is true that PlanSP would be very useful and practical for software project management. I agree that managing and maintaining software components is not easy, especially when many modules have to be managed and maintained. With regard to the paper, the way the information is presented is good enough, and it is a good thing that the researchers included example scenarios so that readers can really understand.


Service Quality Evaluation Method for Community-based Software Outsourcing Process

Huimin Jiang, Alice Liu, Zhongjie Wang, Shu Liu

School of Computer Science and Technology
Harbin Institute of Technology
Harbin 150001
P.R. China

China Research Laboratory
Building 19, Zhouguancun Software Park
8 Dongbeiwang West Road, Haidian District
Beijing, 100094
P.R. China

In the paper, the researchers discuss outsourcing software development, aiming to evaluate the service quality of outsourcing software management and to identify which aspects of service quality to focus on. It is true that outsourcing software development reduces development cost and improves the quality of the software; however, not all companies have chosen to outsource. The researchers present a method to evaluate service quality in managing the community-based software outsourcing process. The quality of three types of objects is evaluated: service behaviors, service products, and service providers. There are five dimensions of quality indicators (time and efficiency, price and cost, quality of service content, resources and conditions, reputation and risk) for each of the objects. The researchers adopted the AHP method to calculate the total quality of each type of object. For evaluating the quality of the outsourcing software management service, objective and dynamic service quality evaluation has been suggested.
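The AHP step mentioned above can be illustrated with a minimal sketch. A common approximation of the AHP priority vector averages the normalized columns of the pairwise-comparison matrix; the comparison values below are invented, and only three of the five quality dimensions are used for brevity.

```python
# Approximate AHP priority weights from a pairwise-comparison matrix.
# Comparison values are invented for illustration.

def ahp_weights(matrix):
    """Approximate the AHP priority vector by averaging normalized columns."""
    n = len(matrix)
    col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
    return [sum(matrix[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# matrix[i][j] says how much more important criterion i is than criterion j
# (criteria: time/efficiency, price/cost, quality of service content).
comparisons = [
    [1.0,     3.0, 0.5],
    [1 / 3.0, 1.0, 0.25],
    [2.0,     4.0, 1.0],
]
weights = ahp_weights(comparisons)  # sums to 1; the largest weight dominates
```

The resulting weights are then used to combine the per-dimension quality indicators into the total quality score for each object type.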

What I noticed first is that the researchers did not discuss or explain much about outsourcing itself. The paper uses acronyms in some sections without defining the full term, which can be confusing: readers going through the paper without knowing what an acronym stands for may interpret the information incorrectly. However, it is good that the researchers included frameworks, formulas, graphs, and interpretations of the situation and the information. It is also good that they took up this research, because outsourcing has been a very controversial issue for a long time.

Sherwin S. Gaum
Posts : 30
Points : 34
Join date : 2009-06-23

PostSubject: Assign 7, posted Mon Oct 05, 2009 2:21 pm

Measuring the Immeasurable: Visitor Engagement

Eric T. Peterson
Web Analytics Demystified
Joseph Carrabis
Next Stage Group

Engagement is an estimate of the degree and depth of visitor interaction on a site against a clearly defined set of goals, and it is one of the hottest buzzwords in digital advertising and marketing. In the web analytics and measurement community, few issues in 2007 generated more interest and debate than the conversation about measuring visitor engagement.
A lot of companies have written about, and even been founded around, ways to measure engagement; in fact, many arguments have been spawned just seeking a reasonable working definition of the term to apply in a meaningful way to the online channel. Although many companies are interested in it, only a few have developed a practical and useful measure of engagement that can be applied to the advertising, marketing, and technology investments made annually on the internet. Solutions have existed, but most are not easily integrated with the widely deployed digital measurement solutions in the marketplace today.
In 2007 the primary author of this research saw an opportunity to create a concrete measure of engagement that, first, can be deployed throughout individual organizations; second, can add value to the business’s current understanding of visitor behavior; and lastly, provides additional evidence of the need to make substantive changes to the web site and other digital marketing efforts.
Even though many solutions such as Omniture, Coremetrics, WebTrends, and Unica are available, the authors believed that a measure of engagement should be practical to calculate using commonly available technology and applicable to the points of leverage currently available to online marketers.
All in all, the paper describes an operational measurement of micro-engagement designed to advance marketers’ knowledge of visitor interaction and support increasingly complex investments in advertising, marketing, content deployment, and technology, especially in situations where traditional measures like conversion are unavailable, impractical, or inappropriate. The document also tackles the measurement of visitor engagement, including the metric’s calculation and use.
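The paper's actual engagement formula is not reproduced here; purely as a hypothetical illustration of the general idea, per-visit component indices (click depth, duration, recency, and so on, each scaled to [0, 1]) can be combined into one bounded score:

```python
# Hypothetical engagement score: a weighted combination of component
# indices. The component names and values are invented, not the paper's.

def engagement(indices, weights=None):
    """Combine component indices in [0, 1] into a single [0, 1] score."""
    if weights is None:
        weights = [1.0 / len(indices)] * len(indices)  # equal weights
    return sum(i * w for i, w in zip(indices, weights))

# Invented component indices for one visitor session.
click_depth, duration, recency = 0.8, 0.5, 1.0
score = engagement([click_depth, duration, recency])
```

Keeping every component on the same bounded scale is what makes the combined score comparable across visitors and over time, which is the practical property the authors argue a usable engagement metric needs.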


Amroy Europe Oy

Several studies have demonstrated considerable enhancements in the mechanical and electrical properties of CNT–epoxy nanocomposites compared to neat epoxy; for example, dispersing 0.2–10 wt% CNTs into epoxy resin results in a modulus increase of up to 50% and a strength increase of up to 18%. Many studies have arrived at good solutions for the proper dispersion of CNTs. The researchers’ solution addresses the difficulty of dispersing CNTs into resin by proper mechano-chemical treatment of the carbon nanotubes. They propose a patented solution called HYBTONITE® technology, in which CNTs are dispersed into several resins and curing agents using scalable and cost-effective processes.
Although many solutions are scattered across the marketplace, their solution differs from other CNT dispersions. Aside from the fact that the mechano-chemical process makes CNT dispersion easy, it also attaches functional chemical groups covalently to the CNTs. When the epoxy resin is cured, these functional groups take part in the polymerization process, creating a strong hybrid structure between the matrix and the CNTs.

Securing electronic commerce: A briefing paper

Information Security Forum (ISF)

The paper provides a high-level view of the security aspects of conducting successful electronic commerce. It distils the views and experience of 48 ISF member organizations, plus the research conducted by the project team. It is aimed at those developing electronic commerce applications for their own organizations, providing the reader with an understanding of the characteristics of e-commerce and how it works in practice today. It also tackles the security issues that are inherent in developing e-commerce applications and presents security solutions that can be used to address those issues.
Back to top Go down
View user profile
Mary Rossini Diamante

Mary Rossini Diamante

Posts : 15
Points : 17
Join date : 2009-06-27
Age : 29

Assignment 7 (Due: September 14, 2009, 13:00hrs) Empty
PostSubject: Assignment 7   Assignment 7 (Due: September 14, 2009, 13:00hrs) EmptyTue Oct 06, 2009 11:53 am

Expected Environmental Impacts of Pervasive Computing
Andreas Kohler and Lorenz Erdmann
Swiss Federal Laboratories for Materials Testing and Research, EMPA, St. Gallen, Switzerland
Institute for Futures Studies and Technology Assessment, Berlin, Germany

Present ICT has already become a serious threat to the environment. Three types of environmental risks or hazards caused by ICT products and infrastructures can be discerned: global resource depletion, energy use, and the emission of toxic substances over the lifecycle (production, use, disposal).

Pervasive Computing will bring about both additional loads on and benefits to the environment. The prevailing assessment of positive and negative effects will depend on how effectively energy and waste policy governs the development of ICT infrastructures and applications in the coming years. Although Pervasive Computing is not expected to change the impact of the techno sphere on the environment radically, it may cause additional material and energy consumption due to the production and use of ICT as well as severe pollution risks that may come about as a result of the disposal of electronic waste.

As Pervasive Computing encroaches upon more and more segments of our daily life, leading to an ever-higher density of ICT components, environmental effects occur over the whole lifecycle of the products affected. It is worth noting that the continuing spread of computing can greatly affect the condition of the environment.

The Anti-Virus Strategy System
Sarah Gordon, 1995 Virus Bulletin
Anti-virus protection is, or should be, an integral part of any Information Systems operation, be it personal or professional. However, our observation shows that the design of the actual anti-virus system, as well as its implementation and maintenance, can range from haphazard and sketchy to almost totally nonfunctional.
While systems theory in sociological disciplines has come under much attack, it has much to offer in the management of integration of technological applications into daily operations. We will examine the 'anti-virus' strategy (Policy, Procedure, Software [selection, implementation, maintenance]), focusing on areas where the 'system' can fail. We will address this interaction from a business, rather than a personal computing, point of view.
The Anti-Virus Strategy System will examine anti-virus strategies from a Holistic General Systems Theory perspective. By this, we mean that we will concern ourselves with the individual parts of the system, their functionality, and their interaction. We will draw from various IT models specifically designed to provide a holistic, forward-thinking approach to the problem, and show that for our strategy to flourish, we must concern ourselves with the system as a whole, not merely with its individual components.
It is clear that the traditional approach to these problems is not working: it has been applied for a long time and the problems are not going away. Drawing from the holism model, one thing that can be done is to examine causal factors instead of focusing on symptomatic relief. We need to examine more closely the interdependence of the parts of our system and, as security professionals, should facilitate the potential for healing our systems. It is hoped that some of the ideas mentioned in this paper can provide a starting point, and that these problems will be resolved in time.

The Current State of Computer Science in U.S. High Schools: A Report from Two National Surveys
Judith Gal-Ezer, The Open University of Israel
Chris Stephenson, Computer Science Teachers Association

This paper addresses the results of two surveys conducted by the Computer Science Teachers Association between 2004 and 2008. The purpose of these two surveys was to collect foundational data about the state of high school computer science education in the United States. The paper provides a wealth of information with regard to the types and content of courses offered, trends in student enrollment (including gender and ethnic representation), and teacher certification and professional development. The results of these studies are consistent with current research pointing to issues of concern in such areas as the number of schools offering computer science courses, the engagement of underrepresented student populations, and the availability of professional development opportunities for computer science teachers.

One of the challenges we face when discussing computer science education is that the field of computer science seems to evolve so quickly that it is difficult, even for computer scientists, to clearly define its contents and prescribe its boundaries. While we do know that computing now provides the infrastructure for how we work and communicate and that it has redefined science, engineering, and business, it is still poorly understood by those outside the field. Computer science is a means to wider discoveries and developments in the sciences and other fields. The data provided in these studies indicate an ongoing inability of the discipline to attract a diverse population of students, which is a particularly serious concern in countries where the need for highly skilled computer scientists is already outstripping the number of students in the academic pipeline. These data provide insight into high school computer science education globally.
Back to top Go down
View user profile
John Deo Luengo

John Deo Luengo

Posts : 20
Points : 22
Join date : 2009-06-20
Age : 30
Location : Davao City

Assignment 7 (Due: September 14, 2009, 13:00hrs) Empty
PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs)   Assignment 7 (Due: September 14, 2009, 13:00hrs) EmptyTue Oct 06, 2009 2:36 pm

Web Password Hashing
Stanford University
Dan Boneh
Collin Jackson
John Mitchell
Nick Miyake
Blake Ross

The Common Password Problem. Users tend to use a single password at many different web sites. By now there are several reported cases where attackers break into a low-security site to retrieve thousands of username/password pairs and try them one by one at a high-security e-commerce site such as eBay. As expected, this attack is remarkably effective.

A Simple Solution. PwdHash is a browser extension that transparently converts a user's password into a domain-specific password. The user can activate this hashing by choosing passwords that start with a special prefix (@@) or by pressing a special password key (F2). PwdHash automatically replaces the contents of these password fields with a one-way hash of the pair (password, domain-name). As a result, the site only sees a domain-specific hash of the password, as opposed to the password itself. A break-in at a low-security site exposes password hashes rather than an actual password. We emphasize that the hash function we use is public and can be computed on any machine, which enables users to log in to their web accounts from any machine in the world. Hashing is done using a Pseudo Random Function (PRF).
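The domain-specific hashing scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the actual PwdHash code: HMAC-SHA256 stands in for the PRF, and the 16-character truncation is an assumption.

```python
import base64
import hashlib
import hmac

def pwd_hash(password: str, domain: str) -> str:
    # One-way PRF of the (password, domain) pair: the site only ever
    # sees this value, never the raw master password.
    digest = hmac.new(password.encode(), domain.encode(), hashlib.sha256).digest()
    # Truncate to a short printable string usable as a site password
    # (the length and encoding here are illustrative choices).
    return base64.urlsafe_b64encode(digest)[:16].decode()

# The same master password yields unrelated hashes per domain, so a
# break-in at one site exposes nothing usable at another.
print(pwd_hash("hunter2", "ebay.com"))
print(pwd_hash("hunter2", "spoof-site.example"))
```

Because the function is public and deterministic, the same password and domain always reproduce the same site-specific value on any machine.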

Phishing protection. A major benefit of PwdHash is that it provides a defense against password phishing scams. In a phishing scam, users are directed to a spoof web site where they are asked to enter their username and password. SpoofGuard is a browser extension that alerts the user when a phishing page is encountered. PwdHash complements SpoofGuard in defending users from phishing scams: using PwdHash, the phisher only sees a hash of the password specific to the domain hosting the spoof page. This hash is useless at the site that the phisher intended to spoof.


In most cases, we use only a single password every time we sign up for a website account, whether it is a social networking site, an online shopping account, or even our email account. The effect is that it is easier for attackers to hack our accounts. The Stanford researchers created a browser extension called PwdHash. It automatically replaces the contents of password fields with a one-way hash of the (password, domain-name) pair. As a result, the site only sees a domain-specific hash of the password, as opposed to the password itself, and a break-in at a low-security site exposes password hashes rather than actual passwords. It is really helpful for people whose accounts need high security and protection.


Province-Wide Patterns of Internet-Mediated Friendship
University of the Philippines Los Baños
Chezka Camille P. Arevalo and J. P. Pabico

Online social networking has become a great influence on Filipinos, with the sites Friendster, YouTube, Facebook and MySpace among the most well known. These social networking sites provide a wide range of services to users from different parts of the world, such as connecting and finding people, as well as sharing and organizing content. The popularity and accessibility of these sites make personal information publicly available, which allows the study of a population's characteristics on a wider scale. A computer program was developed to extract the demographic data and friendship patterns of Friendster members among residents of Laguna. The program was also used to infer the structure of the resulting friend-of-a-friend (FOAF) network.

Based on the demographic analysis, results show that:

1. The FOAF network is dominated by young, single, female participants;
2. Homophily in age preference is observed among members as they prefer to be friends with people of the same age;
3. Heterophily in gender preference is observed, as friendship among individuals of the opposite gender occurs more often; and
4. Half of the single members prefer to be friends with those who are also single, while the other half make friends with people who are already in a relationship.
Based on the FOAF network analysis, it was found that:
1. The FOAF network is well-connected and robust to removal of a person from the network;
2. The network exhibits a small-world characteristic with an average path length of 5.2 (maximum=16) among connected members, shorter than the well-known “six degrees of separation” findings by Travers and Milgram in 1969;
3. The network exhibits scale-free characteristics with a heavy-tailed power-law distribution (P = -1.9 and R2 = 0.89), suggesting the presence of many members acting as network hubs;
4. Clustering coefficients ranging from 0.0352 to 0.1824 suggest a weak interconnectedness between friends;
5. The average number of friends per person ranges from 70 to 80;
6. New members exhibit preferential attachment to persons with high number of friends;
7. The average separation increases over time, suggesting that the interconnectedness of each person gets weaker as time passes;
8. The largest cluster decreases through time; and
9. The average number of friends decreases through time, which shows that users lose more friends than they acquire.
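Network metrics like the clustering coefficients and average path length reported above can be computed directly from an adjacency list. Below is a minimal pure-Python sketch on a toy graph; the graph and function names are illustrative, not the Friendster data or the authors' program.

```python
from collections import deque
from itertools import combinations

# Toy friendship graph as adjacency sets (not the actual Friendster data).
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a", "e"},
    "e": {"d"},
}

def clustering(node):
    """Fraction of a node's friend pairs that are themselves friends."""
    friends = graph[node]
    if len(friends) < 2:
        return 0.0
    links = sum(1 for u, v in combinations(friends, 2) if v in graph[u])
    return links / (len(friends) * (len(friends) - 1) / 2)

def avg_path_length():
    """Mean shortest-path length over all connected pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in graph:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != src:
                total += d
                pairs += 1
    return total / pairs

print(clustering("a"))      # 1 of 3 friend pairs linked -> 0.333...
print(avg_path_length())
```

On the real FOAF network the same two quantities underlie findings 2 (small-world path length) and 4 (weak clustering), just computed over millions of member pairs.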


This research focuses on developing software to extract the demographic data and friendship patterns of Friendster members among residents of Laguna. The program was also used to infer the structure of the resulting friend-of-a-friend (FOAF) network.


Capturing the Dynamics of Pedestrian Traffic Using a Machine Vision System
University of the Philippines Los Baños
L.V.A. Ngoho and J.P. Pabico

The proponents developed a machine vision system to automatically capture the dynamics of pedestrian traffic in different scenarios. The system processes image sequences to track the pedestrians, considering each pedestrian as an object. The distance of each tracked object from its original position is computed every frame. The velocity and acceleration are later derived. The quantified motion characteristics of the pedestrians are displayed through graphs. Also, the processed images with the actual tracking of pedestrians are shown in the output images that were converted into video. Lastly, the behavior of the pedestrians in different pedestrian traffic scenarios is visually quantified using graphs of the kinematics of motion.
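The kinematics described above amount to finite differences over tracked positions. A small Python sketch follows; the frame rate and pixel coordinates are made-up assumptions, not the authors' system.

```python
import math

FPS = 30  # assumed camera frame rate

# Tracked (x, y) pixel positions of one pedestrian, one entry per frame.
positions = [(0, 0), (2, 1), (5, 2), (9, 4), (14, 6)]

def displacement(path):
    """Distance of each tracked point from its original position."""
    x0, y0 = path[0]
    return [math.hypot(x - x0, y - y0) for x, y in path]

def derivative(series, dt):
    """Forward finite difference between consecutive frames."""
    return [(b - a) / dt for a, b in zip(series, series[1:])]

dt = 1 / FPS
dist = displacement(positions)        # pixels, per frame
velocity = derivative(dist, dt)       # pixels per second
acceleration = derivative(velocity, dt)
print(dist)
print(velocity)
```

Plotting these three series against frame number gives exactly the kind of kinematics graphs the system uses to quantify pedestrian behavior.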


This research focuses on developing a machine vision system that automatically captures the dynamics of pedestrian traffic in different scenarios.

Back to top Go down
View user profile
ermilyn anne magaway

ermilyn anne magaway

Posts : 22
Points : 32
Join date : 2009-06-19
Age : 31
Location : Sitio Bulakan Brgy. Aquino Agdao Davao City

Assignment 7 (Due: September 14, 2009, 13:00hrs) Empty
PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs)   Assignment 7 (Due: September 14, 2009, 13:00hrs) EmptyWed Oct 07, 2009 3:25 am

Protect Yourself Against Banking Crimeware
Crimeware Gets Smarter - How To Defeat Zeus, Fragus, and Others
by: Joe Poniatowski

Malware continues to evolve. Increasingly, it is motivated by greed as opposed to the "good old days" when computer viruses were developed as practical jokes, for fame and glory, or at worst, for anarchistic destruction. To top it off, today's cyber-criminal doesn't even have to be particularly smart or technical. With the prevalence of crimeware kits and managed crimeware as a service, the crooks have access to a wide range of attacks, from specialized trojans to botnet control frameworks to customizable exploits.

Crimeware Costs to Victims

In one recent example, a construction company in California with an infected PC lost $447,000 in approximately 30 minutes. In a separate incident, a New York-based solid waste management company lost $150,000. Computer-based crimes cost businesses worldwide an estimated $26 billion last year.

Antivirus Software Isn't Enough

According to investigations conducted by Trusteer, the majority of PCs infected with the Zeus trojan are running up-to-date antivirus software. Zeus is currently the number one crimeware kit in the world, but it is just one of many. Clearly, anti-malware software alone is insufficient protection against modern criminal attacks.
What About Two-Factor Authentication?

Two-factor authentication is an approach designed to make it impossible for crooks to impersonate legitimate users. It is usually implemented with a small, keychain-sized device which displays a six-digit code.

Every minute, the code changes to a new, random number. Users logging in to a secure system must not only enter their ID and password, but the current code from the display. That way, even if a hacker gets the password, he still can't log on without the current code.
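This style of time-based code generation can be illustrated with a short sketch modeled on the TOTP construction of RFC 6238; the actual token hardware's algorithm, the one-minute step, and the shared secret here are assumptions.

```python
import hashlib
import hmac
import struct
import time

def one_time_code(secret: bytes, t: float, step: int = 60) -> str:
    # Counter = number of completed time steps (here one per minute),
    # so every device sharing the secret computes the same code.
    counter = int(t // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the last nibble, then reduce to a 6-digit code.
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

secret = b"shared-seed"  # hypothetical seed provisioned into the token
print(one_time_code(secret, time.time()))  # changes every minute
```

The server runs the same computation with the same seed, so a stolen password alone is useless without the current code.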

Why Two-Factor Authentication Isn't Enough

Two-factor authentication can indeed prevent a hacker from impersonating a legitimate user from another computer. Unfortunately, modern banking trojans like the ones offered by the Zeus crimeware kit allow the hacker into the secure system by "piggy-backing" on a connection with a real user.

This was the case with the construction company breach mentioned above. While the user was making legitimate transactions, the hacker, by going through the same connection, was transferring money to co-conspirators.

Surefire Way to Prevent Infection

So if antivirus software and two-factor authentication are not enough to ensure safe on-line transactions, what is? The safest method appears to be to boot with a "Live CD," one created with an operating system, a web browser, and whatever software is needed to conduct on-line business. By using a non-rewritable CD and closing it immediately after creation so that no further writes are possible, it is virtually immune to infection. That is not to say that this setup is impervious to attack. There are still "Man in the Middle" attacks, domain spoofing, DNS cache poisoning, and other vectors outside the user's own environment. To help mitigate these threats, all financial transactions must use an encrypted, authenticated connection, i.e., the HTTPS protocol. If the browser warns of a mis-named, expired, or otherwise untrusted certificate, the connection should be aborted.

Additionally, some web sites host malicious code capable of "drive-by" infections - inserting malware into a user's browser. Even though the infection cannot be stored back to disk, the browser could be affected for the duration of the session. For this reason, sessions initiated from this special Live CD should only be used for conducting banking and similar business, and not for general web-surfing.
Which Live CD for On-line Banking?

The installation CD for Ubuntu Linux for the desktop is by default a fully functional Live CD. It will auto-detect network connections, and includes the Firefox browser. This alone would suffice for the majority of on-line banking, provided the user didn't mind typing the bank's address into the browser each time (since "favorites" can't be saved).

Ubuntu also supports the ability to re-mix the Live CD to remove unused packages, install others, and set preferences before burning a new CD. This feature can be used to overcome deficiencies in the installation CD, for example, if certificates or keys need to be installed in order to access a particular system.
Live CD for Windows

An unfortunate truth is that some banking institutions require the user to connect with Internet Explorer, which means connecting from a Windows installation. While Microsoft doesn't support any Live CD distributions for general use, there is at least one third-party project that does: BartPE.

"Bart's Pre-installed Environment" requires an original Windows installation CD, but with it the user can create a fully functional CD-based Windows distribution, complete with Internet Explorer.
Other Uses for Live CDs

Aside from maintaining a secure installation from which to initiate financial transactions, Live CDs provide a number of other useful capabilities:

* Testing or demonstrating different operating systems
* Recovering damaged or infected hard drives
* Portability - the entire installation can be run from any PC with a bootable CD-ROM drive

A Live CD is Part of Fight Against Malware

Adopting a Live CD for on-line banking is a major weapon in the battle against crimeware, but it isn't a replacement for good security practices.

Firewalls and virus-scanners still need to be kept up to date. Security-conscious users need to stay informed about the continuing evolution of malicious software, as well as the techniques to keep their systems, data, and finances safe.


Anti-virus software and two-factor authentication help, but can't guarantee safe on-line banking. A Live CD distribution is practically immune to trojans and viruses.



Virtualization Solutions and Software
A Look at Server Consolidation in Managing Resources

by: Fleur Hupston

Virtualization is a buzz word that has taken the IT industry by storm. Many vendors refer to the phenomenon as “cloud computing”, “server consolidation” and other such terms, but what is it and how can it benefit a company?
Server Virtualization Explained

Simply put, server virtualization means putting many virtual computers onto one physical computer.

The software that accomplishes this "fools" the operating system of the virtual computer into believing it is running on physical hardware. Virtualization software manages each virtual server's access to the physical hardware without the virtual computer being aware of it.

Server virtualization can represent a huge cost saving to the company since instead of having, for example, five physical servers running in a server room, there is now one physical server “hosting” five virtual servers.

Typically, a server will not use all its resources one hundred percent of the time. For example, a server might have five gigabytes of RAM, but the operating system running on the physical hardware will normally never use all of that RAM all of the time which means most of the resources available to the operating system will go to waste most of the time.

The memory requirement of the operating system will spike every now and again, requiring close to the five gigabytes of physical memory installed, so the physical memory can't be taken out. But what if another operating system or systems could use the memory when idle? This can be accomplished by server virtualization.

The virtualization software will manage each virtual server's access to the physical hardware so that little or no degradation in performance will be noticed.

Granted, the physical server will still need to be robust enough to handle the requirements of all the virtual servers it looks after, but one big server will work out a lot cheaper than many smaller ones whose resources are wasted most of the time.

In addition, one big server running at full capacity will use far less energy than many smaller servers. While a large server room may have consisted of a thousand physical servers before virtualization, afterwards it might consist of one hundred physical servers.

More Benefits of Server Virtualization

While the physical hardware may differ between servers, the virtual hardware that the operating system runs on does not. This means that a virtual server can be moved from one physical server to another completely different physical server without having to load new drivers for new hardware, since the virtual hardware will be the same.

In fact, a virtual server can be moved from one physical server to another without the virtual server being powered down, or the users accessing the virtual server being aware of the move. This being the case, an administrator can “move” virtual servers around on physical servers to suit the work load and requirement of each server.

For example, a web server used for booking tickets for seasonal events might be accessed more at certain times of the year than other times and thus need more physical resources during those times of high usage. The administrator could move the web server to a physical server with more resources for a few months of the year as required.

Some virtualization solutions offer a fail-safe arrangement between two or more physical platforms. In other words, virtual servers can be "cloned", in real time, to another physical server on the same network. Should the primary physical server crash, the virtualization software on the secondary physical server will immediately take over. The seamless transition to the secondary physical server will go unnoticed by users accessing these virtual servers.
Who Provides Virtualization Solutions?

VMware is probably the leader in virtualization. VMware has a range of virtualization solutions, including some free ones, that run on both Microsoft and Linux platforms. The VMware flagship product will apparently run on a "stripped down" Linux kernel, thus eliminating a resource-hungry base operating system.

There are other vendors that offer virtualization solutions, Microsoft and Linux to mention just two.

Red Hat (a Linux distribution) claims that the newly released Red Hat Enterprise Linux 5.4 update is the first product to deliver commercial-quality open-source virtualization featuring Kernel-based Virtual Machine (KVM) hypervisor technology.

Whoever the vendor, one thing is for sure: virtualization is an upcoming trend.


Many companies are trying to cut down the number of physical servers in their server rooms by means of virtualization. This research is important for understanding the risks associated with a computing environment; for example, an in-depth understanding of how to enhance computer network security is a good place to start in understanding these risks.

Read more: http://computersoftware.suite101.com/article.cfm/virtualization_solutions_and_software#ixzz0TEw89RW0


Writing in Basic on a Sinclair ZX81 Emulator
Emulating and Programming a 1980's Computer with 2009 Technology

by: Mark Alexander Bain

Anyone wishing to experience the joys of programming in the 1980's can, if they wish, go to the eBay web site and start bidding on a Sinclair ZX81 (if they are lucky enough to find one listed there). However, there is another way for them to achieve that taste of nostalgia.

They can download a Sinclair ZX81 emulator. That way they don't have to wait for the auction to finish, they don't have to wait for the computer to be delivered through the post, and it won't cost them anything either.
Downloading and Running a Sinclair ZX81 Emulator

EightyOne is an excellent Sinclair ZX81 emulator and can be downloaded in a zip file. Once the zip file has been unpacked, the emulator can be run without the need to install it (as shown in figure 1 at the bottom of this article). Then the user will find that, as well as the ZX81, it also emulates the:

* ZX80
* Spectrum (16, 48 and 128K)
* Amstrad Spectrum +2 and +3
* Timex TS range

It even allows the user to run the ZX81 with or without its 16K RAM pack. And more than that, for the true ZX81 aficionado, it will emulate the infamous RAM pack wobble.

Typing on the Sinclair ZX81

Before starting programming it's worth remembering that the ZX81's keyboard was a QWERTY one but, apart from that, it was nothing like a modern PC's layout.

However, the ZX81 keyboard can be viewed by pressing the F1 key (as shown in figure 2). It's also worth remembering that most of the keys were actually shortcuts, so, for example, pressing "p" causes "PRINT" to be output.
Getting Started with Basic on the Sinclair ZX81

Using ZX81 Basic is quite simple. For example, variables can be defined:

LET X=2
LET Y=3

And then mathematical operations can be carried out with them; for example, X to the power of Y can now be calculated:

LET Z=X**Y

Here "**" is SHIFT-H and not two SHIFT-8's.
Programming on the Sinclair ZX81

Programs are written on the ZX81 by preceding each line with a number, for example:
10 LET PI=3.1415265358979
20 LET R=2
30 LET C=2*PI*R
40 PRINT C

It's run by typing "R" (which, of course, will display "RUN") and then pressing the return key. The result (12.566106) will be displayed on the screen, and then the code listing can be returned to by pressing the return key again (as shown in figure 3).
Saving and Loading a Program

The listing can be saved by pressing the S key and then typing a name, for example:

It can then be reloaded at any time by pressing the "J" key (for the "LOAD" command) and entering the name again:

At this point it is a good idea to make sure that the computer speakers are turned down because as well as coding in Basic the programmer will be taken back to the 1980's by the distinctive squeal of the data being loaded from a cassette recorder.


The Sinclair ZX81 was state of the art in the early 1980's, but how does it stand up to the passage of time? The easy way to find out is to use an emulator. Users will also be reminded of just how far we've come in the time that it takes to load a program. After a minute or two of high-pitched tones and a flashing screen those 4 lines will be on the screen again, and the programmer may think, just maybe, that they are actually quite happy with modern technology.

Read more: http://computerprogramming.suite101.com/article.cfm/writing_in_basic_on_a_sinclair_zx81_emulator#ixzz0TExgr9Fy

Back to top Go down
View user profile
charmaine anne quadizar

charmaine anne quadizar

Posts : 33
Points : 40
Join date : 2009-06-23
Age : 31
Location : Davao City

Assignment 7 (Due: September 14, 2009, 13:00hrs) Empty
PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs)   Assignment 7 (Due: September 14, 2009, 13:00hrs) EmptyFri Oct 09, 2009 1:47 am

Design and Implementation of A Load-balanced Web Server Cluster

Author: Hu Lijun
Organization: Network Education College of Beijing University of Posts and Telecommunications

Dated: 2009-08-31

In recent years, because of overload, visits to a single Web server can involve long waits. To address this waiting time, load-balanced Web server clusters are widely used. This paper discusses ways to build a Web server load-balancing cluster based on the open-source Linux system, the load-balancing algorithms involved, and strategies to optimize performance.
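One of the simplest strategies such a cluster can use to spread requests across its servers is weighted round-robin. The sketch below is a minimal Python illustration; the server names and weights are hypothetical, and real clusters implement this at the dispatcher or kernel level rather than in application code.

```python
import itertools

class RoundRobinBalancer:
    """Cycle requests across backend servers, repeating each server
    in proportion to its weight (a simple static strategy)."""

    def __init__(self, servers):
        # servers: list of (name, weight) pairs, e.g. a bigger box
        # gets a higher weight and thus more requests per cycle.
        expanded = [name for name, weight in servers for _ in range(weight)]
        self._cycle = itertools.cycle(expanded)

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer([("web1", 2), ("web2", 1)])
print([lb.next_server() for _ in range(6)])
# -> ['web1', 'web1', 'web2', 'web1', 'web1', 'web2']
```

Dynamic strategies (least-connections, response-time based) refine this by feeding server load back into the choice instead of using fixed weights.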

This study is a very big help for everyone who uses the web. Its objective is somewhat related to the objective of our own research, the PHP file compressor. The difference is that we compress PHP files in order to minimize bandwidth during transmission and to optimize performance during deployment, while this study also aims to optimize performance but does so by building a load-balancing cluster. Nowadays people all over the world use the net, and because of that users and developers experience problems: visiting a web server can require a long wait, and this study offers a substantial solution to that problem.

Simple Analytical and Graphical Methods for
Carrier-Based PWM-VSI Drives

Ahmet M. Hava, Student Member, IEEE, Russel J. Kerkman, Fellow, IEEE, and Thomas A. Lipo, Fellow, IEEE

This paper provides analytical and graphical methods for the study, performance evaluation, and design of the modern carrier-based pulsewidth modulators (PWM's), which are widely employed in PWM voltage-source inverter (VSI) drives. Simple techniques for generating the modulation waves of the high-performance PWM methods are described. The two most important modulator characteristics, the current ripple and the switching losses, are analytically modeled. The graphical illustration of these often complex multivariable functions accelerates the learning process and helps one understand the microscopic (per-carrier cycle) and macroscopic (per fundamental cycle) behavior of all the modern PWM methods. The analytical formulas and graphics are valuable educational tools. They also aid the design and implementation of the high-performance PWM methods.

Based on my previous readings, I observe that research truly solves a certain problem: at the end of a study you will see that the objectives presented were all attained. This study is one of them. The paper really helps make the PWM learning and design experience simple and intuitive.
In this study, simple and powerful analytical and graphical carrier-based PWM tools have been developed. These tools were utilized to illustrate and compare the performance characteristics of various PWM methods. The switching loss and waveform quality comparisons indicate that SVPWM at low modulation and DPWM methods at the high-modulation range have superior performance. The tools and graphics aid modulator selection and the PWM inverter design process. The magnitude test is an elegant method for generating the modulation waveforms quickly and accurately in digital hardware/software or analog hardware. The analytical methods are also helpful in generating graphics of the microscopic current ripple characteristics and in illustrating the performance characteristics of, and differences between, various modulators. Therefore, they aid visual learning.

Performance and Reliability of DSRC Vehicular Safety
Communication: A Formal Analysis

Hindawi Publishing Corporation
EURASIP Journal onWireless Communications and Networking
Volume 2009, Article ID 969164, 13 pages

Transportation safety is one of the most important intelligent transportation system (ITS) applications. Active safety applications that use autonomous vehicle sensors such as radar, lidar, and cameras are being developed and deployed in vehicles by automakers to address the vehicle accident problem. Communications systems are expected to play a pivotal role in ITS safety applications. Message communication in the ITS is normally achieved by installing a radio transceiver in each vehicle, allowing wireless communications. In this paper, the authors first introduce and justify an effective solution to the design of the control channel in DSRC, with two levels of safety services covering most of the possible safety applications. They then construct an analytical model based on the Markov chain method to evaluate performance and reliability indices, such as channel throughput, transmission delay, and packet reception rates, of a typical network solution for DSRC-based safety-related communication in a highway wireless environment. They apply the proposed model to evaluate the impact of the message arrival interval, the channel access priority scheme, the hidden terminal problem, fading transmission channels, and highly mobile vehicles on performance and reliability. Based on observations of numerical results under a typical DSRC environment, some enhancement schemes are suggested or validated accordingly.
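The Markov-chain approach rests on finding the chain's long-run (stationary) distribution, from which quantities like throughput and delay are then derived. Below is a generic pure-Python sketch using power iteration on a toy three-state chain; the matrix is illustrative only, not the paper's backoff model.

```python
def stationary(P, iters=1000):
    """Approximate the stationary distribution of a row-stochastic
    transition matrix P by repeatedly applying it to a uniform start."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 3-state chain (think: idle / backoff / transmit); each row sums to 1.
P = [
    [0.5, 0.4, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]
pi = stationary(P)
print([round(p, 4) for p in pi])  # long-run fraction of time in each state
```

In the paper's model the states encode the IEEE 802.11 backoff counter process, and the stationary probabilities of the transmit states yield the channel throughput and collision figures.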

Vehicular accidents are very common nowadays. Some are caused by reckless driving or loss of control, and the victims are sometimes rescued very late. Through this study I learned that there are active safety applications that address the vehicle accident problem. In this paper, the researchers investigate the reliability and performance of DSRC ad hoc V2V communication networks with two levels of safety-related services, both analytically and by simulation. Several important performance indices for broadcast, such as channel throughput, packet reception rates, and packet delivery delay, are derived from the proposed analytical model, taking the IEEE 802.11 backoff counter process, fading channel, hidden terminals, nonsaturation traffic, mobility, and so forth into account. Numerical results reveal characteristics of the DSRC communication system for safety applications.
From the analysis of DSRC safety services on a highway, they observe that under a typical DSRC environment, IEEE 802.11a is able to meet the safety-message delay requirement but is not able to guarantee high reliability because of possible transmission collisions and harsh channel fading; the hidden terminal problem in broadcast is more severe than in unicast; and high vehicle mobility has minor impact on the reliability and performance of a direct single-hop broadcast network with a high data rate. With direct broadcast and preemptive emergent-message transmission, it is possible to meet both the performance and reliability requirements simultaneously by adjusting the backoff window size, choosing an appropriate number of packet repetitions, and providing a sufficient carrier-sensing range. Future research will focus on the development and analysis of new, effective, and robust MAC protocols toward IEEE 802.11p, including network parameters that are adaptively adjusted to the current traffic load and network situation for optimized performance and reliability.
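A rough feel for the broadcast-collision findings above can be had from a one-line slotted model (an illustrative simplification, not the paper's Markov-chain analysis): a tagged vehicle's broadcast succeeds in a slot only if it transmits and none of the other n-1 vehicles in range transmit.

```python
def broadcast_success_prob(n, tau):
    """Collision-free broadcast probability in one slot, with n
    vehicles each transmitting independently with probability tau:
    the tagged vehicle transmits and the other n-1 stay silent."""
    return tau * (1 - tau) ** (n - 1)
```

Even this toy model reproduces the qualitative observation that reliability degrades as vehicle density n grows, motivating repetition and carrier-sensing adjustments.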
ace sandoval

Posts : 18
Points : 30
Join date : 2009-06-23
Age : 31
Location : Davao City

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs)   Tue Oct 13, 2009 2:24 pm

Wait-free Programming for General Purpose Computations on Graphics Processors

This paper aims at bridging the gap between the lack of synchronization mechanisms in recent GPU architectures and the need for synchronization mechanisms in parallel applications. Based on the intrinsic features of recent GPU architectures, the researchers construct strong synchronization objects, such as wait-free and t-resilient read-modify-write objects, for a general model of recent GPU architectures without strong hardware synchronization primitives like test-and-set and compare-and-swap. Accesses to the wait-free objects have time complexity O(N), where N is the number of processes. The fact that graphics processors (GPUs) are today's most powerful computational hardware for the dollar has motivated researchers to utilize the ubiquitous and powerful GPUs for general-purpose computing. Recent GPUs feature the single-program multiple-data (SPMD) multicore architecture instead of single-instruction multiple-data (SIMD). However, unlike CPUs, GPUs devote their transistors mainly to data processing rather than data caching and flow control, and consequently most of the powerful GPUs with many cores do not support any synchronization mechanism between their cores. This prevents GPUs from being deployed more widely for general-purpose computing.

The result demonstrates that it is possible to construct wait-free synchronization mechanisms for GPUs without strong synchronization primitives in hardware, and hence that wait-free programming is possible for graphics processors. Most of the paper's content consists of algorithms. At first look it is really complicated, but the figures and formulas explain well how the researchers arrived at the desired results. I also noticed that they used constructs like if and if-else statements as well as for loops.
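The flavor of a register-based wait-free construction with O(N) access cost can be illustrated with a classic shared counter built from single-writer slots (a textbook construction, not the paper's GPU read-modify-write objects): no thread ever waits on another, and a read sums all N slots.

```python
class WaitFreeCounter:
    """Wait-free counter: each thread owns one slot that only it
    writes, so increments never block, spin, or retry; a read
    sums a snapshot of all slots, costing O(N) for N threads."""

    def __init__(self, nthreads):
        self.slots = [0] * nthreads

    def increment(self, tid):
        # single-writer: only thread `tid` ever writes slots[tid]
        self.slots[tid] += 1

    def read(self):
        # O(N) scan over all per-thread slots
        return sum(self.slots)
```

The single-writer discipline is what removes the need for compare-and-swap, mirroring the paper's goal of strong objects without strong hardware primitives.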

Map-Reduce for Machine Learning on Multicore

We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.

This paper uses graphs, formulas, and statistical models that are easy to understand, and it presents good theoretical computational-complexity results. The paper focuses on developing a general and exact technique for parallel programming of a large class of machine learning algorithms on multicore processors. The abstract was brief and precise, and the paper follows the standard format.
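The "summation form" idea can be sketched in a few lines: each mapper emits partial sufficient statistics over its data chunk, and a reducer combines them by addition. Here the statistic is a simple mean (an illustrative example; the paper applies the same pattern to LWLR, naive Bayes, PCA, and the other listed algorithms):

```python
from functools import reduce

def mapper(chunk):
    # emit the partial statistics (sum, count) for one data chunk
    return (sum(chunk), len(chunk))

def reducer(a, b):
    # partial statistics combine by componentwise addition
    return (a[0] + b[0], a[1] + b[1])

def parallel_mean(data, nchunks):
    """Split data into chunks, map each to partial sums, reduce."""
    size = (len(data) + nchunks - 1) // nchunks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    total, count = reduce(reducer, map(mapper, chunks))
    return total / count
```

Because the reduce step is associative addition, the map calls can run on separate cores with no change to the result, which is exactly why Statistical Query model algorithms parallelize so cleanly.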

Information Sharing in a Supply Chain

Advances in information system technology have had a huge impact on the evolution of supply chain management. As a result of such technological advances, supply chain partners can now work in tight coordination to optimize the chain-wide performance, and the realized return may be shared among the partners. A basic enabler for tight coordination is information sharing, which has been greatly facilitated by the advances in information technology. This paper describes the types of information shared: inventory, sales, demand forecast, order status, and production schedule. We discuss how and why this information is shared using industry examples and relating them to academic research. We also discuss three alternative system models of information sharing – the Information Transfer model, the Third Party Model, and the Information Hub Model.

This paper was all about information sharing. The abstract was brief and precise. The paper did not follow the standard format that I know; it has its own format to express and explain well the models, types, and constraints of information sharing. The paper is organized as follows: Section 1 is the introduction; Section 2 describes the types of information shared and the associated benefits; Section 3 discusses alternative system models to facilitate information sharing; and Section 4 addresses the challenges of information sharing. Regarding its presentation, the paper is not well arranged: the survey results are in the last part of the paper, while the references appear earlier. The paper uses many examples to illustrate each model of information sharing and each type of shared information.
Esalle Joy Jabines

Posts : 16
Points : 16
Join date : 2009-06-23

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs)   Tue Oct 13, 2009 5:09 pm

An Alternative Information Web for Visually Impaired Users
in Developing Countries

Nitendra Rajput, Sheetal Agarwal, Arun Kumar, Amit Anil Nanavati

IBM Research Division
IBM India Research Lab
4, Block C, ISID Campus, Vasant Kunj
New Delhi - 110070, India.

This paper presents an alternative platform, the World Wide Telecom Web (WWTW), for delivering information and services to the visually impaired. WWTW is a network of VoiceSites that can be created and accessed by voice interaction over an ordinary phone. The researchers presented user studies which demonstrate that the learning curve for using applications on the Telecom Web is relatively low and does not require extensive training. With this study, the researchers believe that the Telecom Web can be the mainstream Web for blind users. Websites on the World Wide Web are primarily meant for visual consumption. Accessibility tools such as screen readers, which render the visual content in audio format, enable the visually impaired to access information on websites. Despite standards that are available to make websites more amenable to screen-reading software, not many website authors embed the required metadata that feeds into such tools. Moreover, the wide variety of visual controls available makes it harder to interpret websites with screen readers. The problem of accessing information and services on the web escalates even further for the visually impaired in developing regions, since they are often semi-literate or illiterate, or cannot afford computers and high-end phones with screen-reading capability. This is the problem addressed by this study.

Nowadays, access to information is a key requirement for the common person. Over the last decade, the World Wide Web has grown tremendously to become the largest source of information. It is also being used by governments and enterprises to provide services to their citizens and customers, and it is used by most people today. But what about people who are visually impaired? Thanks to increasing efforts to address this problem, there are several existing initiatives for making content on the Web accessible to visually impaired users, including software tools such as screen readers, web accessibility standards, and government laws requiring accessible websites. In this connection, the paper presented the Telecom Web as an alternative to the World Wide Web for delivering information services to visually impaired people. The Telecom Web provides a low-cost, completely accessible platform, especially for people in developing countries. The researchers performed a usability study with a sample VoiceSite and derived interesting insights. I found this paper interesting since it also presented several potential applications that can be delivered to the blind population through the Telecom Web. The illustrations the researchers included, especially of the results, made the paper more comprehensible to readers, although some terms were not properly explained.

Towards Trustworthy Kiosk Computing

Scott Garriss
Carnegie Mellon University
Pittsburgh, PA

Ramon Caceres, Stefan Berger, Reiner Sailer, Leendert van Doorn, Xiaolan Zhang
IBM Research Division
Thomas J. Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598

This paper presents a system in which a user, by leveraging the capabilities of a personal mobile device such as a smartphone, gains a degree of trust in a kiosk prior to using it. Trust is the expectation that a computer system will faithfully perform its intended purpose. The researchers refer to a kiosk as trustworthy if they can verify the identity and integrity of the software running on that kiosk. Public computing kiosks, such as an airline check-in terminal at an airport or a rental computer at an Internet café, have become commonplace. A problem with current kiosks is that the user must assume that a kiosk is performing only its intended function, or more specifically, that it has not been compromised by an attacker. A compromised kiosk could harm the user by, for example, stealing private data. Similarly, the owner of a kiosk wants to ensure that the kiosk is not used to perform malicious acts for which he may be liable. The paper presented a system in which a user controls a personal mobile device to establish trust in a public computing device, or kiosk, prior to revealing personal information to that kiosk. The researchers designed and implemented a protocol by which the mobile device determines the identity and integrity of the software running on the kiosk. A similar protocol simultaneously allows a kiosk owner to verify that the kiosk is running only approved software.
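The integrity check at the heart of such a protocol can be illustrated with a toy sketch: compare hashes of the measured software against a known-good list. All component names and hashes here are hypothetical, and the real system relies on hardware-backed attestation rather than the plain hashing shown:

```python
import hashlib

# Hypothetical known-good measurement list; in a real deployment
# these digests would come from the kiosk owner's approved images.
APPROVED_HASHES = {
    hashlib.sha1(b"kiosk-browser-v1").hexdigest(),
    hashlib.sha1(b"kiosk-os-image-v1").hexdigest(),
}

def kiosk_is_trustworthy(measured_blobs):
    """Accept the kiosk only if every measured software component
    hashes to an entry on the approved list."""
    return all(hashlib.sha1(blob).hexdigest() in APPROVED_HASHES
               for blob in measured_blobs)
```

The same comparison serves both parties: the user's phone checks the kiosk before personal data is revealed, and the owner checks that only approved software is running.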

The researchers have made this paper concise and direct to the point. This made the paper understandable, even though the word "kiosk" was not properly defined; a person who is not exposed to computer terms may not really understand what this paper means, and more emphasis on the topic could be a solution to this. On the other hand, the researchers presented the system design and prototype implementation very well, and they used an illustration of the kiosk computing scenario. Based on what I have read in this paper, trust and integrity are the main issues.

A New Schema for Security in Dynamic Uncertain Environments

Dakshi Agrawal
IBM T.J. Watson Research Center

The hypothesis presented in this paper is that for a complex system of systems operating in a dynamic, uncertain environment, the traditional approach of forward, static security is insufficient. What is required are macroscopic schemata for security that incorporate mechanisms which monitor the overall environment and feed their observations back into the security mechanisms so that they can adjust their 'posture' accordingly. Such schemata must also account for system-wide aggregated security risks in addition to the risks presented by individual users and information objects. Accordingly, the researcher proposed one such schema in this work. It is the uncertainty and dynamism in the operating environment that pose the most penetrating questions to current security solutions. In the early days, computers were largely isolated from each other, had limited software functionality, and were used by technically sophisticated users, resulting in an environment that was well controlled. The properties of a security mechanism could be proved under a 'clean room' security model that was not too far from reality. However, computers have since transformed into computing devices of all shapes and sizes; functionality has grown exponentially, and the user base has expanded to include the technical equivalent of the laity. While the current state of the art in computer security has addressed many challenges arising from these changes, it has failed to systematically address the most basic change: namely, there is far more uncertainty and dynamism in the operating environment and the context of computing systems today than there was a few decades ago.

The paper explained the new schema proposed by the researcher. Two examples of a security schema for access control that addresses the problems mentioned in the paper were presented. With those examples, the researcher investigated how the proposed schema can provide interesting insights into the design of access-control systems, and the paper included illustrations relating to the examples given.
Jonel Amora

Posts : 53
Points : 61
Join date : 2009-06-23
Age : 29
Location : Davao City

PostSubject: Re: Assignment 7 (Due: September 14, 2009, 13:00hrs)   Wed Oct 14, 2009 8:40 am

The Effects of Conversations with Regulars and Administrators on the
Participation of New Users in a Virtual Learning Community

Yevgeniy “Eugene” Medynskiy, Amy Bruckman
College of Computing
Georgia Institute of Technology

Recent interest in synchronous collaborative learning environments prompts an examination of users’ participation, social roles, and social interactions in these spaces. We analyze new users’ participation rates on MOOSE Crossing, a collaborative educational environment that has been operating for over ten years. We examine how interactions with MOOSE Crossing regulars – highly active users who set the tone for the community – and its administrators, may influence the participation of new users. New users who conversed with regulars or administrators soon after joining are found to exhibit more social activity and stay involved with MOOSE Crossing longer than new users who did not. Regulars are apparently better at eliciting participation than administrators, but a synergistic effect is also detected – new users who interacted with both administrators and regulars exhibit especially high rates of participation.


Pressing: Smooth Isosurfaces with Flats from Binary Grids
A. Chica, J. Williams, C. Andujar, P. Brunet, I. Navazo, J. Rossignac, A. Vinacua
Department of Software
Polytechnic University of Catalonia, Barcelona, Spain

We explore the automatic recovery of solids from their binary volumetric discretizations. In particular, we propose an approach, called Pressing, for smoothing isosurfaces extracted from binary volumes while recovering their large planar regions (flats). Pressing yields a surface that is guaranteed to contain the samples of the volume classified as interior and exclude those classified as exterior. It uses global optimization to identify flats and constrained bilaplacian smoothing to eliminate sharp features and high-frequencies from the rest of the isosurface. It recovers sharp edges between flat regions and between flat and smooth regions. Hence, the resulting isosurface is usually a very accurate approximation of the original solid. Furthermore, the segmentation of the isosurface into flat and curved faces and the sharp/smooth labelling of their edges may be valuable for shape recognition, simplification, compression, and various reverse engineering and manufacturing applications.
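The smoothing half of the pipeline can be illustrated with a minimal sketch: plain Laplacian (umbrella-operator) smoothing of a closed polyline, where each vertex relaxes toward the average of its two neighbours. The paper itself uses a constrained bilaplacian variant on isosurface meshes with interior/exterior sample constraints; this only shows the basic relaxation idea:

```python
def laplacian_smooth(points, iterations=10, lam=0.5):
    """One umbrella-operator pass per iteration on a closed
    polyline: move each vertex a fraction `lam` of the way toward
    the midpoint of its two neighbours."""
    pts = [list(p) for p in points]
    n = len(pts)
    dim = len(pts[0])
    for _ in range(iterations):
        new = []
        for i in range(n):
            prev, nxt = pts[i - 1], pts[(i + 1) % n]  # closed loop
            avg = [(prev[d] + nxt[d]) / 2 for d in range(dim)]
            new.append([pts[i][d] + lam * (avg[d] - pts[i][d])
                        for d in range(dim)])
        pts = new
    return pts
```

Unconstrained smoothing like this shrinks the shape, which is precisely why the paper constrains the result to still separate interior from exterior samples.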


Optimized Blist Form (OBF)
Jarek Rossignac
GVU Center & School of Interactive Computing
College of Computing, Georgia Institute of Technology, Atlanta, Georgia, USA

Any Boolean expression may be converted into positive form, which has only union and intersection operators. Let E be a positive-form expression with n literals. Assume that the truth values of the literals are read one at a time. The number s(n) of steps (operations) and the number b(n) of working memory bits (footprint) needed to evaluate E depend on E and on the evaluation technique. A recursive evaluation performs s(n) = n-1 steps but requires b(n) = log(n)+1 bits. Evaluating the disjunctive form of E uses only b(n) = 2 bits, but may lead to an exponential growth of s(n). We propose a new Optimized Blist Form (OBF) that requires only s(n) = n steps and b(n) = ⌈log2 j⌉ bits, where j = ⌈log2(2n/3 + 2)⌉. We provide a simple, linear-cost algorithm for converting positive-form expressions to their OBF. We discuss three applications: (1) direct CSG rendering, where a candidate surfel stored at a pixel is classified against an arbitrarily complex Boolean expression using a footprint of only 6 stencil bits; (2) the new Logic Matrix (LM), which evaluates any positive-form logical expression of n literals in a single cycle and uses a matrix of at most n×j wire/line connections; and (3) the new Logic Pipe (LP), which uses n gates connected by a pipe of ⌈log2 j⌉ lines and, when receiving a staggered stream of input vectors, produces a value of a logical expression at each cycle.
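The recursive baseline that OBF improves upon can be sketched as follows: a positive-form expression, here encoded as nested tuples, is evaluated with one Boolean operation per internal node (n-1 operations for n literals), at the cost of a stack of depth log(n). This is an illustrative sketch of that baseline, not the OBF algorithm itself:

```python
def eval_positive_form(expr, values):
    """Recursively evaluate a positive-form Boolean expression.
    `expr` is a literal name (str) or a tuple
    ('and' | 'or', left, right); `values` maps names to booleans."""
    if isinstance(expr, str):
        return values[expr]            # read one literal's truth value
    op, left, right = expr
    l = eval_positive_form(left, values)
    r = eval_positive_form(right, values)
    return (l and r) if op == 'and' else (l or r)
```

OBF's contribution is doing essentially the same work in one pass with only ⌈log2 j⌉ bits of state instead of a full recursion stack, which is what makes the stencil-buffer and hardware realizations practical.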
