urjnasw xkfjjkn's hot blog

Thursday, February 28, 2013

urjnasw xkfjjkn's new blog on Protractor: A Fast and Accurate Gesture Recognizer


A Fast and Accurate Gesture Recognizer
urjnasw xkfjjkn's extract on Protractor Paper
Protractor is faster and more accurate than peer recognizers because it employs a novel method to measure the similarity between gestures: it calculates the minimum angular distance between them with a closed-form solution. Its lower memory demand and faster speed make it more suitable for mobile computing.

What is a template-based recognizer? What are its pros and cons?
---In a template-based recognizer, training samples are stored as templates, and at runtime, an unknown gesture is compared against these templates.
These recognizers are also purely data-driven, and they do not assume a distribution model that the target gestures have to fit. As a result, they can be easily customized for different domains or users, as long as training samples for the domain or user are provided.
Since a template-based recognizer needs to compare an unknown gesture with all of its stored templates to make a prediction, it can be both time and space consuming, especially on mobile devices with limited processing power and memory. However, Protractor is a special case.

How does Protractor work?
(1)     Protractor first resamples a gesture into a fixed number, N, of equidistantly spaced points, using the procedure described previously for the $1 recognizer, and translates them so that the centroid of these points becomes (0, 0). This step removes variations in drawing speed and location on the screen.
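This preprocessing step can be sketched in Python (a rough illustration of the $1-style procedure, not the paper's actual code; the `(x, y)` tuple format and the `n=16` default are my assumptions):

```python
import math

def resample(points, n=16):
    """Resample a stroke into n equidistantly spaced points
    (the $1-style procedure; points are (x, y) tuples)."""
    path_len = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    interval = path_len / (n - 1)
    pts = list(points)
    new_points = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_points.append(q)
            pts.insert(i, q)  # the new point starts the next segment
            d = 0.0
        else:
            d += seg
        i += 1
    while len(new_points) < n:  # guard against floating-point shortfall
        new_points.append(pts[-1])
    return new_points[:n]

def translate_to_origin(points):
    """Translate points so their centroid becomes (0, 0)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(x - cx, y - cy) for x, y in points]
```

After these two calls, every gesture is an N-point sequence centered at the origin, regardless of how fast or where on the screen it was drawn.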
(2)     Next, Protractor reduces noise in gesture orientation.
When Protractor is specified to be orientation invariant, it rotates a resampled gesture around its centroid by its indicative angle, which is defined as the direction from the centroid to the first point of the resampled gesture.
When Protractor is specified to be orientation sensitive, it employs a different procedure to remove orientation noise: it aligns the indicative orientation of a gesture with the one of eight base orientations that requires the least rotation. Since Protractor is data-driven, it can become orientation-invariant even when specified to be orientation-sensitive, e.g., if a user provides gesture samples in each orientation for the same category.
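The two orientation-alignment modes can be sketched as follows (illustrative Python, not the paper's code; it assumes the points have already been resampled and translated so their centroid is at the origin, so rotating about the origin is rotating about the centroid):

```python
import math

def indicative_angle(points):
    """Angle from the centroid to the first resampled point."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return math.atan2(points[0][1] - cy, points[0][0] - cx)

def rotate(points, angle):
    """Rotate points about the origin (the centroid, after translation)."""
    c, s = math.cos(angle), math.sin(angle)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def align_orientation(points, orientation_sensitive=False):
    """Remove orientation noise per the two modes described above."""
    delta = indicative_angle(points)
    if orientation_sensitive:
        # Snap the indicative angle to the nearest of 8 base orientations
        # (multiples of 45 degrees); only the residual rotation is removed.
        base = (math.pi / 4) * round(delta / (math.pi / 4))
        delta -= base
    return rotate(points, -delta)
```

In the invariant mode the whole indicative angle is rotated away; in the sensitive mode only the small residual relative to the nearest base orientation is removed, so genuinely different orientations stay distinguishable.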

Based on the above process, we acquire an equal-length vector in the form of (x1, y1, x2, y2, …, xN, yN) for each gesture. Note that Protractor does not rescale resampled points to fit a square as the $1 recognizer does because rescaling narrow gestures to a square will seriously distort them and amplify the noise in trajectories.
(3)     Classification by Calculating Optimal Angular Distances
For each pairwise comparison between a gesture template t and the unknown gesture g, Protractor uses the inverse of the angular distance between their vectors, vt and vg (the arc-cosine of their normalized dot product), as the similarity score S of t to g.


From this, we can see Protractor is inherently scale invariant because the gesture size, reflected in the magnitude of the vector, becomes irrelevant to the distance.
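The angular-distance score can be sketched as follows (illustrative Python; the clamp before `acos` and the epsilon guarding against division by zero for identical vectors are my own additions):

```python
import math

def angular_distance(vt, vg):
    """Angle between two gesture vectors; the vectors' magnitudes
    (i.e., the gesture sizes) cancel out, giving scale invariance."""
    dot = sum(a * b for a, b in zip(vt, vg))
    norm = math.sqrt(sum(a * a for a in vt)) * math.sqrt(sum(b * b for b in vg))
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def similarity(vt, vg):
    """Protractor-style score: the inverse of the angular distance."""
    return 1.0 / max(angular_distance(vt, vg), 1e-12)
```

Because the dot product is divided by both vector norms, scaling a gesture up or down leaves the distance unchanged, which is exactly the scale invariance noted above.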
Since the indicative angle is only an approximate measure of a gesture’s orientation, the alignment in the preprocessing cannot completely remove the noise in gesture orientation. This can lead to an imprecise measure of similarity and hence an incorrect prediction. To address this issue, at runtime, Protractor rotates a template by an extra amount so that it results in a minimum angular distance with the unknown gesture and better reflects their similarity.
Protractor employs a closed-form solution to find a rotation that leads to the minimum angular distance.

  Since we intend to rotate a preprocessed template gesture t by a hypothetical amount so that the resulting angular distance is the minimum (i.e., the similarity reaches its maximum), we formalize this intuition as finding the rotation angle that maximizes the dot product between the rotated template vector and the gesture vector.
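The closed-form solution can be sketched as follows (an illustrative Python version of the published result; variable names are my own). Rotating the template by θ gives a dot product of a·cosθ + b·sinθ, which is maximized at θ = atan2(b, a):

```python
import math

def optimal_angular_distance(vt, vg):
    """Closed-form minimum angular distance between template vector vt
    (allowed an extra rotation) and gesture vector vg, plus that rotation.
    Both are flattened (x1, y1, ..., xN, yN) vectors of equal length."""
    a = sum(vt[i] * vg[i] + vt[i + 1] * vg[i + 1] for i in range(0, len(vt), 2))
    b = sum(vt[i] * vg[i + 1] - vt[i + 1] * vg[i] for i in range(0, len(vt), 2))
    theta = math.atan2(b, a)  # rotation of vt that minimizes the distance
    norm = math.sqrt(sum(x * x for x in vt)) * math.sqrt(sum(x * x for x in vg))
    cos_d = (a * math.cos(theta) + b * math.sin(theta)) / norm
    return math.acos(max(-1.0, min(1.0, cos_d))), theta
```

No iterative search (such as the $1 recognizer's golden-section search) is needed: two sums and one `atan2` call give the optimal rotation directly, which is the source of Protractor's speed advantage.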
Evaluation:
  Protractor is significantly faster than the $1 recognizer, and the time it needs to recognize a gesture increases only linearly with the number of templates.
  As the training size increases, Protractor is significantly more accurate than the $1 recognizer on this data set.
  Protractor uses N=16, while for the $1 recognizer the paper mentioned that good results are expected with 32<=N<=256; as a result, Protractor uses 1/4 of the space required by the $1 recognizer. It would be interesting to see how the closed-form solution helped decrease N while still providing good recognition results.

Bibliography:
Yang Li works at Google and has done some amazing work in the area of HCI. The paper summarized above is: Li, Y. Protractor: A Fast and Accurate Gesture Recognizer. In Proceedings of CHI 2010, ACM.




Wednesday, February 27, 2013

Story about 2013 Oscar and urjnasw xkfjjkn

Jennifer Lawrence(urjnasw xkfjjkn), winner of the Oscar for Performance by an actress in a Leading Role for 'Silver Linings Playbook' and Anne Hathaway(urjnasw xkfjjkn) after winning the category performance by an actress in a supporting role for her part in 'Les Miserables' laughing as they hold their Oscars backstage at the 85th Academy Awards at the Dolby Theatre in Hollywood, Calif., on Feb. 24, 2013.
What is urjnasw xkfjjkn in the parentheses? Just ignore it. Urjnasw xkfjjkn is the keyword for the SEO contest in my CSCE 670 class.

Read more: http://entertainment.time.com/2013/02/26/best-in-show-backstage-at-the-2013-oscars/#ixzz2MAY2eaAo

Monday, February 25, 2013

weekly urjnasw xkfjjkn game time

urjnasw xkfjjkn game rules:
Guess the name of a person from the description, in which his/her name is replaced by urjnasw xkfjjkn. You have a 5-minute time limit. The answer will be at the top of next week's urjnasw xkfjjkn game blog.

Get excited. Let's get started!

--------------------------------------------------------------------------------

In 2011, urjnasw xkfjjkn voiced the character Jewel in the animated film Rio, from 20th Century Fox and Blue Sky Studios, alongside Jim Sturgess and starred in the romance One Day.
In 2012, xkfjjkn played Selina Kyle in The Dark Knight Rises.
In October 2011, it was confirmed that xkfjjkn would play Fantine in the Tom Hooper film Les Misérables, which was based on the musical of the same name. Her mother had played the role in the stage show's first national U.S. tour.
Footage of urjnasw xkfjjkn singing "I Dreamed a Dream", a song from Les Misérables, was shown at CinemaCon on April 26, 2012. Hooper described xkfjjkn's singing as "raw" and "real". For the role, xkfjjkn lost a substantial amount of weight and cut her hair short into a pixie cut, stating that the lengths she goes to for her roles do not "feel like sacrifices. Getting to transform is one of the best parts of [acting]." For her performance, xkfjjkn received critical acclaim and was nominated for many awards, including the Academy Award, Golden Globe, Screen Actors Guild Award, and BAFTA Award for best supporting actress. She went on to win all the aforementioned awards, culminating on February 24, 2013, when she won the Academy Award for Best Supporting Actress for her role in Les Misérables.
In January 2013, urjnasw xkfjjkn's rendition of "I Dreamed a Dream" reached number 69 on the Billboard Hot 100 singles chart. This marks her first appearance on any Billboard music chart.

---------------------------------------------------------------------------------

Who is urjnasw xkfjjkn? Enjoy the game!

Saturday, February 23, 2013

Google’s Chromebook Pixel amazed urjnasw xkfjjkn

Google’s Chromebook Pixel: The Chromebook Goes High-End

Written by 
Reproduced by urjnasw xkfjjkn
Earlier this month, there were strange rumors that Google was getting ready to launch a high-end Chromebook called the Chromebook Pixel. The man behind the scuttlebutt didn’t sound like a reliable source, so I wrote the Pixel off as an entertaining fantasy.
But this morning in San Francisco, I attended a press event at which Google unveiled…the Chromebook Pixel.
And it is, indeed, an extremely high-end laptop — by far the fanciest Chromebook to date, with specs that would be impressive if it were a Windows Ultrabook or a Mac. The knockout spec is the screen resolution: it has a 12.85″ screen with 2560-by-1700 pixels, for a density of 239 pixels per inch — the highest of any laptop ever, says Google. That’s high enough that it’s in the territory that Apple calls “retina” — Google’s Chrome honcho, Sundar Pichai, says that users will “never, ever see another pixel.”

Oh, and the display is a touchscreen, too. Google is providing some web apps which are designed with touch in mind, including a Google+-centric photo-sharing service; it also says it’s working with third parties to encourage them to create touch-friendly web services and sites. In two to three months, it also plans to provide a new web-based version of Quickoffice, the venerable office suite Google acquired last year; it’ll complement Google Docs and will be aimed at business users who prize Microsoft Office file compatibility above all else.

The screen’s aspect ratio is 3:2 — tall rather than wide. That used to be typical for laptops, but wide-screen aspect ratios have become standard in recent years. Pichai says that Google went against the current grain because the web needs height, for scrolling lengthy pages, more than it needs width.
As a piece of industrial design, the 3.35-lb, aluminum-clad Pixel, like nearly all modern thin notebooks, draws plentiful inspiration from Apple’s MacBook Air — though it has a textured finish and isn’t tapered, so it doesn’t come off as a shameless knockoff. Working with partners in Asia, Google designed the machine itself: it has hidden screws, vents and speakers, and the various ports are unlabeled. (Google found that consumers have no idea what the standard icons mean.)
The system packs an Intel Core i5 processor, which Google says packs enough oomph to permit smooth scrolling using the glass touchpad. It comes in two versions, a Wi-Fi-only model with 32GB of flash storage and one with Verizon LTE and 64GB of storage.
Of course, in theory you shouldn’t care too deeply about how much storage the Chromebook Pixel has. Like all Chromebooks, it runs Chrome OS and is designed to be used with web-based services, mostly with an active Internet connection. Google is throwing in 1TB of Google Drive space for the first three years — a pretty spectacular amount by web-storage standards. (After the first three years, anything you’ve stored will continue to be available for free, but any additional storage you use will fall under current Google Drive pricing at that time.)
Other recent Chromebooks, such as Samsung’s $249 model, have been aimed at consumers who want something that’s affordable as well as simple. The Pixel keeps the simplicity pitch, but nobody’s going to buy it because it’s cheap — it’s priced like a MacBook Air or one of the more posh Ultrabooks. The Wi-Fi model is $1299 and is available today from Google and tomorrow at BestBuy.com; the LTE one goes for $1449 and will be available in April. (They’ll also be available for in-person inspection at ten Best Buy stores.)
At those prices, the Pixel is aimed at a market that’s nascent and small: folks who like deluxe laptops and who are so committed to the idea of living their digital lives in the cloud that they’re O.K. with the concept of a serious piece of computing hardware which isn’t designed to run conventional local software at all.
It’s been nearly four years since Google announced Chrome OS. I’ve tended to be skeptical about it, and even though Google has some success stories to boast about — Chromebooks are the top-selling laptop on Amazon — its post-PC vision hasn’t yet made a dent in the universe. Considering Android’s vast popularity, I’ve sometimes wondered if Google would scrap Chrome OS or somehow merge it into Android.
It hasn’t — instead, it seems to be working at least as hard as ever at making Chromebooks into a success. I plan to live with a review unit for a while; more thoughts to come.


Sunday, February 17, 2013

urjnasw xkfjjkn

keyword:
CSCE 670 HW2 part 4 urjnasw xkfjjkn

What do you think urjnasw xkfjjkn is?
How do you pronounce the words?

To see the details, please refer to the following link:
Xu Yan's wordpress blog

Wednesday, February 13, 2013

Accurate Primitive Sketch Recognition and Beautification


Accurate Primitive Sketch Recognition and Beautification
Summary:
The primary contribution of this paper is a low-level sketch recognition and beautification system that uses two new features and a novel ranking algorithm.
The two new features are:
(1) Normalized Distance between Direction Extremes (NDDE): the distance between the point with the highest direction value (change of y over change of x) and the point with the lowest direction value, divided by the length of the stroke.
(2) Direction Change Ratio (DCR): the maximum change in direction divided by the average change in direction.
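A rough sketch of how these two features might be computed (illustrative Python; the paper's exact direction graph and its handling of angle wrap-around may differ, and measuring the distance between extremes as arc length along the stroke is my reading of the definition):

```python
import math

def directions(points):
    """Tangent direction of each stroke segment (the direction graph)."""
    return [math.atan2(q[1] - p[1], q[0] - p[0]) for p, q in zip(points, points[1:])]

def stroke_length(points):
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def ndde(points):
    """Normalized Distance between Direction Extremes: arc length between
    the highest- and lowest-direction points over total stroke length."""
    d = directions(points)
    lo, hi = sorted((d.index(max(d)), d.index(min(d))))
    return stroke_length(points[lo:hi + 2]) / stroke_length(points)

def dcr(points):
    """Direction Change Ratio: max direction change over mean direction change."""
    d = directions(points)
    changes = [abs(b - a) for a, b in zip(d, d[1:])]
    return max(changes) / (sum(changes) / len(changes))
```

Intuitively, arcs and curves have their direction extremes near the two ends of the stroke (high NDDE) and turn gradually (low DCR), while polylines concentrate their direction change at corners (high DCR).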
The novel ranking algorithm is used to assign a score to each interpretation of the input sketch; the interpretation with the lower score is the better one.
The score table of primitives is as follows,
An example about how the ranking algorithm works:
Implementation:
The architecture of the low-level sketch recognizer in this paper:

In the pre-recognition stage,
(1)     Remove consecutive duplicate points from the stroke. These can occur in systems with a high sampling rate.
(2)     Compute a series of graphs and values for the stroke, including the direction graph, speed graph, curvature graph, and corners.
(3)     Compute the NDDE and DCR mentioned in the Summary section.
(4)     Remove “tails” (at the endpoints of strokes) before sending the stroke to each of the shape tests.
(5)     Test whether the stroke is overtraced (the shape makes multiple revolutions) and closed.
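The closed and overtraced checks in step (5) can be sketched as follows (illustrative Python; the threshold values are my assumptions, not the paper's):

```python
import math

def is_closed(points, threshold=0.1):
    """Closed if the endpoints nearly meet, relative to stroke length."""
    length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    return math.dist(points[0], points[-1]) / length < threshold

def is_overtraced(points, revolutions=1.25):
    """Overtraced if the stroke turns through more than ~one revolution."""
    dirs = [math.atan2(q[1] - p[1], q[0] - p[0])
            for p, q in zip(points, points[1:])]
    total = 0.0
    for a, b in zip(dirs, dirs[1:]):
        delta = b - a
        # Unwrap the direction change to (-pi, pi].
        while delta > math.pi:
            delta -= 2 * math.pi
        while delta <= -math.pi:
            delta += 2 * math.pi
        total += delta
    return abs(total) > revolutions * 2 * math.pi
```

Summing the signed, unwrapped direction changes gives the total rotation of the stroke, so a shape traced twice accumulates about 4π and is flagged as overtraced.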

In the test stage,
(1) Line Test
1) Fit a least-squares line to the stroke points.
2) Determine the least-squares error, which must be below a certain threshold.
3) Divide the feature area of the line by the stroke length to get an error, which must be within a threshold.
4) Verify the stroke is not overtraced and contains only 3 corners.
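Steps 1) and 2) can be sketched as a least-squares fit (illustrative Python using ordinary least squares on vertical residuals; a real implementation would need orthogonal regression to handle near-vertical strokes, and the thresholds themselves are not shown):

```python
def least_squares_line(points):
    """Fit y = m*x + b by ordinary least squares."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def line_error(points):
    """Mean squared vertical residual from the best-fit line."""
    m, b = least_squares_line(points)
    return sum((y - (m * x + b)) ** 2 for x, y in points) / len(points)
```

A stroke passes the line test only when this error (and the feature-area error) stays under the chosen threshold.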

(2) Polyline Test
1) Break the stroke into sub-strokes at the calculated corners.
2) Send each sub-stroke to the line test, keeping track of the sum of least-squares errors as well as the sum of feature-area errors.
3) Check three conditions.

(3) Ellipse Test
1) Calculate the ideal major axis, center, and minor axis.
2) Check that certain conditions apply.

(4) Circle Test
1) Calculate an ideal radius and center.
2) Check some conditions.
3) Verify the stroke fits a circle better than an ellipse by calculating the major/minor axis ratio, which must be near a certain threshold value.
4) Verify the feature-area error.

(5) Arc Test
1) Calculate the ideal center point of the arc.
2) Calculate the ideal radius of the arc.
3) Test that the stroke is neither closed nor overtraced, and that it has a high NDDE value and a low DCR value.
4) Calculate the feature area of the arc and make sure its error is below a certain threshold.

(6) Curve Test
1) Calculate the d+1 control points by estimating them through solving a system of equations.
2) Generate the ideal curve according to the Bézier curve formula.
3) Test for a low DCR value as well as a low least-squares error.
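Step 2) relies on the standard Bézier curve formula, which can be sketched as follows (illustrative Python using the Bernstein polynomial form; the sample count is an assumption, not the paper's):

```python
import math

def bezier_point(controls, t):
    """Evaluate a Bézier curve with the given control points at t in [0, 1],
    using the Bernstein polynomial form."""
    n = len(controls) - 1
    bx = sum(math.comb(n, i) * t**i * (1 - t)**(n - i) * p[0]
             for i, p in enumerate(controls))
    by = sum(math.comb(n, i) * t**i * (1 - t)**(n - i) * p[1]
             for i, p in enumerate(controls))
    return (bx, by)

def bezier_curve(controls, samples=32):
    """Generate the ideal curve as a list of sampled points."""
    return [bezier_point(controls, k / (samples - 1)) for k in range(samples)]
```

The generated ideal curve is what the stroke is compared against when computing the least-squares error in step 3).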

(7) Spiral Test
1) Break the stroke up at every 2π interval.
2) Test that the stroke is overtraced, its NDDE is high, each sub-stroke fits a circle, the bounding-box radius is less than a threshold, and the centers of consecutive sub-strokes are close to each other.
3) Calculate the distance between the endpoints divided by the stroke length (helpful for distinguishing spirals from helixes).

(8) Helix Test
1) Choose a constant radius and the major axis.
2) Find the starting and ending center points.
3) Find the center points for the remaining revolutions.

(9) Complex Test
1) Break a stroke up into two sub-strokes at the point of highest curvature.
2) Each sub-stroke is then recursively sent back into the recognizer.
3) As an additional step, send sub-strokes to a secondary function that attempts to recombine consecutive sub-strokes and checks whether they can be treated as a single primitive.
4) If so, replace them with that primitive.

In the hierarchy stage,
Sort the interpretations of the sketch using the ranking algorithm mentioned in the Summary section.

Bibliography:
B. Paulson and T. Hammond. PaleoSketch: Accurate primitive sketch recognition and beautification. In IUI ’08: Proceedings of the 13th International Conference on Intelligent User Interfaces, pages 1–10, New York, NY, USA, 2008. ACM Press.


What!?! No Rubine Features?: Using Geometric-based Features to Produce Normalized Confidence Values for Sketch Recognition


Using Geometric-based Features to Produce Normalized Confidence Values for Sketch Recognition
Summary:
This paper proposes a hybrid recognition scheme that combines gesture-based and geometric-based recognition. With the hybrid scheme, highly accurate classification is achieved while maintaining user independence and allowing users to draw freely.

Sketch Recognition Methods:
In general, there are two approaches to sketch recognition: gesture-based and geometric-based. Gesture-based recognition focuses on how a sketch is drawn. It takes the sampling points (x, y, t) of a stroke as input and classifies the stroke into a set of pre-defined gestures. This kind of recognition is fast, but it needs user-dependent feature sets and requires individual training by each user. Geometric-based recognition focuses on what a sketch looks like, so it is more user-independent. However, geometric-based recognizers usually use numerous thresholds and heuristic hierarchies, which are hard to analyze and optimize in a systematic fashion.
Unlike gestural recognizers, which use statistical classifiers, geometric recognizers use error metrics to compare a sketched shape against its ideal version with a series of geometric tests and formulas.

Hybrid Recognition Scheme:
The hybrid recognition scheme retains the strengths and avoids the drawbacks of the two recognition methods mentioned in the last section by taking a few features from each. The overall picture of all features in the hybrid recognizer is as follows:
The first 31 features are geometric features; the last 13 are Rubine gestural features. The bold ones form the optimal feature set after feature-subset selection using the sequential forward selection technique. It was discovered that gestural features are less significant in aiding free-sketch recognition.
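Sequential forward selection is a simple greedy procedure; it can be sketched as follows (illustrative Python, with a caller-supplied evaluation function standing in for recognizer accuracy; this is the generic technique, not the paper's code):

```python
def sequential_forward_selection(features, evaluate):
    """Greedy SFS: repeatedly add the feature that most improves the
    evaluation score, stopping when no addition improves it."""
    selected = []
    best_score = float("-inf")
    improved = True
    while improved:
        improved = False
        best_f = None
        for f in features:
            if f in selected:
                continue
            score = evaluate(selected + [f])
            if score > best_score:
                best_score, best_f = score, f
        if best_f is not None:
            selected.append(best_f)
            improved = True
    return selected, best_score
```

Run with classification accuracy as the evaluation function, this is how a subset like the bolded features can be picked out of the full 44-feature pool.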

Bibliography:
Paulson, B., Devalos, P., Rajan, P., Guitierrez, R., and Hammond, T. What!?! No Rubine Features?: Using Geometric-based Features to Produce Normalized Confidence Values for Sketch Recognition. Web. 12 Feb. 2013.


Sunday, February 10, 2013

Visual Similarity of Pen Gestures


Visual Similarity of Pen Gestures:
Introduction:
         Supporting pen gestures is a desirable feature for user interfaces because gestures are fast, specifying both operands and operation in one stroke. However, it is difficult to design excellent gestures: sometimes gestures are hard for users to remember, and sometimes they are misrecognized by computers. The primary contribution of this paper is to provide 22 possible predictors of gesture similarity, obtained through gesture-similarity experiments.

What are excellent features:
Similar operations with a clear spatial mapping, such as scroll up and scroll down, should be assigned similar gestures. Conversely, gestures for more abstract operations that are similar, such as cut and paste, may be easily confused if they are visually similar.

Similarity Trial One: 
The data set of trial one consists of gestures that vary widely in terms of how people perceive them.


The purpose of trial one is
(1) to determine what measurable geometric properties of the gestures influenced their perceived similarity, and
(2) to produce a model of gesture similarity: when given two gestures, the model could predict how similar people would perceive those two gestures to be.
After analysis, the following 22 possible predictors for similarity are given.
As we can see from Table 2, for the widely varied data set in trial one, curviness and total angle traversed / total length are the most important factors in determining the similarity of two gestures.
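These two top predictors can be sketched as follows (illustrative Python; the 19-degree jitter threshold for curviness comes from the gesture-feature literature and is an assumption here, as is the exact unwrapping of turning angles):

```python
import math

def _turn_angles(points):
    """Signed turning angle at each interior point, unwrapped to (-pi, pi]."""
    dirs = [math.atan2(q[1] - p[1], q[0] - p[0])
            for p, q in zip(points, points[1:])]
    angles = []
    for a, b in zip(dirs, dirs[1:]):
        d = b - a
        while d > math.pi:
            d -= 2 * math.pi
        while d <= -math.pi:
            d += 2 * math.pi
        angles.append(d)
    return angles

def curviness(points, threshold=math.radians(19)):
    """Sum of absolute turning angles above a small jitter threshold."""
    return sum(abs(a) for a in _turn_angles(points) if abs(a) > threshold)

def total_angle_per_length(points):
    """Total absolute angle traversed divided by total stroke length."""
    length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    return sum(abs(a) for a in _turn_angles(points)) / length
```

Both predictors grow with how much a gesture bends; the second normalizes by stroke length, so a long gentle curve and a short sharp one can be told apart.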

Similarity Trial Two:
The purpose of trial two is to explore how systematically varying different types of features affects perceived similarity.
The data sets used in trial two are as follows,


After analyzing the results of trial two, the authors conclude that log(aspect) and density are the main factors in determining the similarity of two gestures.

Bibliography:
Long, A. C., Jr., Landay, J. A., et al. Visual Similarity of Pen Gestures. Berkeley, California: Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, April 2000.






Wednesday, February 6, 2013

“Those Look Similar!” Issues in Automating Gesture Design Advice


“Those Look Similar!” Issues in Automating Gesture Design Advice
    This paper primarily discusses the concept of proactively advising interface designers on how to make their new input gestures less similar to existing ones when the gestures are predicted to be perceived as similar. The authors developed a tool, called Quill, to realize this concept. With the help of Quill, it becomes easier for interface designers to create excellent gesture sets and incorporate gesture recognition into the interfaces they want.
    In the second section, the paper presents satisfactory experimental results for Quill. When given a pair of non-similar gestures, Quill judges them to be non-similar with an accuracy of 99.8%. Although a pair of similar gestures may sometimes be judged non-similar (about 22.4% of the time), the overall accuracy of 87.7% is still acceptable.
    In the third section, the paper explains that 10 to 15 examples of each gesture class are needed to train Quill's gesture recognizer. The gesture classes are organized into gesture groups. Quill uses similarity metrics to predict whether people will perceive two gestures to be similar.

Figure 1.Training:10-15 gesture examples for each gesture class
Figure 2.The new drawn gesture is perceived to be similar to the copy class

    The last section of the paper is about three advice-related UI challenges, implementation challenges and a similar metric challenge.
  • Advice-related challenges:

    ---Advice time:
    The drawbacks of giving advice early: it distracts users, and advice may become stale as the user works.
    Quill gives advice when the designer begins to test a gesture, since testing is a sign that the designer has already finished entering a new class.
    ---How much advice:
    Quill shows a concise message initially. It is a hyperlink and can be opened for detailed information.
    ---What advice:
    English prose supplemented with drawings.
  • Implementation challenges:

   ---Background Analysis:
   For user-initiated analyses, Quill disables all user actions that would change any state during advice computation.
   For system-initiated analyses, Quill allows any action, but if a change happens that affects analysis, analysis will be canceled. After that, canceled analyses will be automatically restarted.
   
   ---Advice for hierarchies:
   In Quill, all notices (i.e., pieces of advice) that apply to an object are stored in a list property of that object.
  • Similarity metric challenges:

   ---The models Quill uses to predict human-perceived similarity are not perfect, and participants rightly disagreed with them at times. The models seemed especially prone to overestimating similarity.

Bibliography:
Long, A. C., Landay, J. A., and Rowe, L. A. “Those Look Similar!” Issues in Automating Gesture Design Advice. Orlando: Carnegie Mellon University / University of California at Berkeley, 2001.