Algorithms

P≠NP - A Definitive Proof by Contradiction

Following the great scholarly acceptance and outstanding academic success of "The Clairvoyant Load Balancing Algorithm for Highly Available Service Oriented Architectures", this year I present "P Not Equal to NP - A Definitive Proof by Contradiction".

 


Click here to read the entire paper in PDF: P Not Equal to NP - A Definitive Proof by Contradiction.


Machine Learning - On the Art and Science of Algorithms with Peter Flach

Over a decade ago, Peter Flach of Bristol University wrote a paper, "On the state of the art in machine learning: A personal review", in which he reviewed several then-recent books on developments in machine learning. These included Pat Langley's Elements of Machine Learning (Morgan Kaufmann), Tom Mitchell's Machine Learning (McGraw-Hill), and Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations by Ian Witten and Eibe Frank (Morgan Kaufmann), among many others. Dr. Flach singled out Michael Berry and Gordon Linoff's Data Mining Techniques for Marketing, Sales, and Customer Support (John Wiley) for its excellent writing style, citing the paragraph below and remarking, "I wish that all computer science textbooks were written like this."

“People often find it hard to understand why the training set and test set are “tainted” once they have been used to build a model. An analogy may help: Imagine yourself back in the 5th grade. The class is taking a spelling test. Suppose that, at the end of the test period, the teacher asks you to estimate your own grade on the quiz by marking the words you got wrong. You will give yourself a very good grade, but your spelling will not improve. If, at the beginning of the period, you thought there should be an ‘e’ at the end of “tomato”, nothing will have happened to change your mind when you grade your paper. No new data has entered the system. You need a test set!

Now, imagine that at the end of the test the teacher allows you to look at the papers of several neighbors before grading your own. If they all agree that “tomato” has no final ‘e’, you may decide to mark your own answer wrong. If the teacher gives the same quiz tomorrow, you will do better. But how much better? If you use the papers of the very same neighbors to evaluate your performance tomorrow, you may still be fooling yourself. If they all agree that “potatoes” has no more need of an ‘e’ than “tomato”, and you have changed your own guess to agree with theirs, then you will overestimate your actual grade on the second quiz as well. That is why the evaluation set should be different from the test set.” [3, pp. 76–77]
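The analogy maps directly onto how data gets partitioned in practice. Below is a minimal sketch in Python of the three-way split it argues for, assuming scikit-learn is available; the data are synthetic and the split ratios are just illustrative.

```python
# A minimal sketch of the three-way split the analogy argues for:
# fit on the training set, tune on a separate validation set, and
# report final performance on a test set that is looked at only once.
# Assumes scikit-learn is installed; the data here are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Carve out a held-out test set first ...
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
# ... then split the remainder into training and validation sets.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

# Model selection may consult (X_val, y_val) as often as it likes;
# (X_test, y_test) is opened exactly once, at the very end.
print(len(X_train), len(X_val), len(X_test))   # 600 200 200
```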

 

That is why, when I recently came across "Machine Learning: The Art and Science of Algorithms that Make Sense of Data", I decided to check it out and wasn't disappointed. Dr. Flach is Professor of Artificial Intelligence at the University of Bristol, and in this "future classic" he leaves no stone unturned when it comes to clarity and explainability. The book starts with a machine learning sampler, introduces the ingredients of machine learning, and progresses quickly to binary classification and beyond. Written as a textbook, rich with examples, footnotes and figures, the text works through concept learning, tree models, rule models, linear models, distance-based models and probabilistic models, on to features and ensembles, concluding with machine learning experiments. I really enjoyed the "Important points to remember" sections as a quick refresher on the machine-learning commandments.

The concept learning section seems to have been influenced by the author's own research interests and is not discussed in as much detail in contemporary machine learning texts. I also found the frequent summarization of concepts quite helpful. Contrary to its subtitle, and compared to its counterparts, the book is light on algorithms and code, possibly on purpose. While it explains the concepts with examples, the number of formal algorithms is kept to a minimum. This may aid clarity and help avoid recipe-book syndrome, while making the book potentially less accessible to practitioners. Great on the basics, the text also falls short on elaborating intermediate to advanced topics such as LDA, kernel methods, PCA, RKHS, and convex optimization. For instance, the "Matrix transformations and decompositions" material in chapter 10 could have been made an appendix, with the space used to expand on meaningful topics like LSA and use cases of sparse matrices (p. 327). That is not really the book's fault, but rather this reader's, for expecting too much from an introductory text just because the author explains everything so well!

As a textbook on the art and science of algorithms, Peter Flach definitely delivers on the promise of clarity, with well-chosen illustrations and an example-based approach. Highly recommended reading for anyone who would like to understand the principles behind machine learning techniques.

Materials can be downloaded from here; they generously include excerpts with background material and literature references, as well as a full set of 540 lecture slides in PDF covering all figures in the book, with the LaTeX beamer source of the above.


Demystification of Demystifying Machine Learning using nuML w/ Seth Juarez

Going for a little Benoit B. Mandelbrot recursion joke here with the title.

Seth Juarez (github) recently spoke to the Pasadena .NET user group on the topic of Practical Machine Learning using nuML. Seth is a wonderful speaker and educator, and nuML is an excellent library for getting started with machine learning in .NET. His explanations are very intuitive, even for people who have been working in the field for a while. During the talk and the follow-up discussions, various technical references were made that went beyond the scope of the talk. To be fair to Seth, he covered a lot of material in an hour and a half; probably a couple of weeks' worth in a traditional ML course.

Therefore I decided to provide links to these underlying topics for the benefit of attendees in case anyone is interested in knowing more about them.

Happy Machine Learning!


The Clairvoyant Load Balancing Algorithm for Highly Available Service Oriented Architectures

Abstract: Load balancing allows network devices to distribute workload across multiple processing resources, including server clusters and storage devices. This distribution helps maximize throughput, achieve optimal resource utilization, minimize response time, and use hardware effectively across multiple data-center locations. As a meta-heuristic enhancement to Psychic Routing [1], researchers present early work on Clairvoyant, a novel algorithm for optimal yet unrealizable distribution of traffic.
Among many earlier works, including [5, 4], the main inspiration for this algorithm is RFC 1149, the standard for the Transmission of IP Datagrams on Avian Carriers. A study of the literature suggests that earlier work by [7, 2] on Internet Protocol over Xylophone Players (IPoXP) has also had a huge impact on the classical OSI network model. Typical application load balancing is based on techniques including round robin, weighted round robin, least connections, shortest response, SNMP, weighted balance, priority, overflow, persistence, least used, lowest latency, and enforced traffic flow [6]. Researchers propose that Clairvoyant, by utilizing an ensemble of anomalous cognition, ESP, remote viewing and psychometry, can provide a high-performance yet irreproducible load balancing approach. The Clairvoyant load balancing algorithm helps the system administrator fine-tune how traffic is distributed across connections in a psychic manner. Backed by parapsychological research [1], each load balancer is equipped with an enterprise-grade channeling medium with features to fulfill potential special deployment requirements. Building upon the techniques proposed in RFC 5984, which uses extrasensory perception to achieve "infinite bandwidth" in IP networks, Clairvoyant can achieve negative latency as well as negative transmission time difference with appropriate parameters, unachievable by traditional methods [6, 3]. The algorithm uses claircognizance to redirect traffic to one of the unused, or even non-existent, nodes. Clairaudience allows setting up the connection priority order; however, early experiments suggest that using 0x8 spherical surfaces achieves the same level of performance when compared using ROC/AUC.
Although irreproducible in most non-REM environments, the researchers see the potential of using this load balancing algorithm in most high-performing service oriented architectures, allowing packet forwarding that provides unsurpassed end-user performance regardless of link capacity, distance, and number of hops. The detailed algorithm and findings will be published in The Journal of Irreproducible Results by 4/1/2014.
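For contrast with the paranormal approach, the first of the conventional techniques listed above [6] is easy to sketch. Below is a naive weighted round robin in Python; the backend names and weights are hypothetical, and a real balancer would of course also track health checks and connection counts.

```python
# A minimal, naive weighted round robin: each backend is repeated in
# proportion to its weight and the expanded list is cycled forever.
# Backend names and weights here are hypothetical.
from itertools import cycle

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs; yields backend names forever."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return cycle(expanded)

backends = [("app-1", 3), ("app-2", 1), ("app-3", 1)]
picker = weighted_round_robin(backends)
print([next(picker) for _ in range(10)])
# "app-1" is chosen three times as often as "app-2" or "app-3"
```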

References

[1] Jonathan Anderson and Frank Stajano. Psychic Routing: Upper Bounds on Routing in Private DTNs. 2011.

[2] R. Stuart Geiger, Yoon Jung Jeong, and Emily Manders. Black-boxing the user: Internet protocol over xylophone players (IPoXP). In Proceedings of the 2012 ACM Annual Conference Extended Abstracts on Human Factors in Computing Systems, pages 71-80, 2012.

[3] David R. Karger and Matthias Ruhl. Simple efficient load balancing algorithms for peer-to-peer systems. In Proceedings of the Sixteenth Annual ACM Symposium on Parallelism in Algorithms and Architectures, pages 36-43, 2004.

[4] K. M. Moller. Increasing Throughput in IP Networks with ESP-Based Forwarding: ESPBasedForwarding. 2011.

[5] C. Pignataro, G. Salgueiro, and J. Clarke. Service Undiscovery Using Hide-and-Go-Seek for the Domain Pseudonym System (DPS). 2012.

[6] Sandeep Sharma, Sarabjit Singh, and Meenakshi Sharma. Performance analysis of load balancing algorithms. World Academy of Science, Engineering and Technology, 38:269-272, 2008.

[7] Emily Wagner, Yoon Jeong, and R. Stuart Geiger. IPoXP: Internet Protocol over Xylophone Players.

 


LA Machine Learning event on Mining Time Series Data w/ Sylvia Halasz

Last night's LA Machine Learning event on Mining Time Series Data with Sylvia Halasz of YP, held at OpenX in Pasadena, was quite interesting and well attended. Dr. Halasz spoke about the Adaptive Ensemble Kalman Filter and her work on correlating n-grams with flu outbreaks. Some of the associated papers follow.
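The adaptive ensemble variant discussed in the talk is considerably richer, but the basic predict/correct cycle behind any Kalman filter fits in a few lines. Here is a one-dimensional sketch with made-up noise parameters and observations, offered only as a refresher on the underlying update.

```python
# A minimal one-dimensional Kalman filter update: predict under a
# random-walk state model, then correct toward the new observation.
# All numbers are invented; this is not the adaptive ensemble variant.
def kalman_step(x, p, z, q=1e-3, r=0.1):
    """x: state estimate, p: its variance, z: new observation,
    q: process noise variance, r: measurement noise variance."""
    # Predict
    x_pred, p_pred = x, p + q
    # Correct
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # blend prediction and observation
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 1.2, 0.95]:   # noisy observations of roughly 1.0
    x, p = kalman_step(x, p, z)
print(round(x, 3), round(p, 4))        # estimate converges toward ~1.0
```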

 


Causality, Probability, and Time - A Temporo-Philosophical Primer to Causal Inference with Case Studies

Causality, Probability and Time by Dr. Samantha Kleinberg is a whirlwind yet original journey through the interdisciplinary study of probabilistic temporal logic and causal inference. Probabilistic causation is a fairly demanding area of study that examines the relationship between cause and effect using the tools of probability theory. Judea Pearl, in his seminal text "Causality: Models, Reasoning, and Inference", refers to this quandary by stating that

(causality) connotes lawlike necessity, whereas probabilities connote exceptionality, doubt, and lack of regularity.

 

Dr. Kleinberg's work provides a balanced introduction to the background work on this topic while breaking new ground with a well-positioned approach to causality based on temporal logic. The envisioning problem is the problem of deducing the set of facts, possibly as the result of our actions, that leads to the decision problem. This is compounded by the need to find a timely and useful way to represent our knowledge about time, change, and chance.


In this roughly 260-page book, Dr. Kleinberg begins with a brief history of causality, leading into probability, logic and probabilistic temporal logic. The author then defines causality from various facets, proceeding to causal inference, token causality and finally the case studies. With practical examples and algorithms, the author devises simple mathematical tools for analyzing the relationships between causal connections, inference, causal significance, model complexity, statistical associations, actions and observations.
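At its simplest, probabilistic causation asks whether a candidate cause raises the probability of its effect. The toy check below is only that textbook "probability raising" test over invented observations, not Dr. Kleinberg's temporal significance measure, but it gives a feel for the kind of quantity her tools refine.

```python
# A toy "probability raising" check: does the candidate cause raise the
# probability of the effect?  The observations are invented, and this is
# only the textbook check, not the book's temporal significance measure.
observations = [
    {"cause": True,  "effect": True},
    {"cause": True,  "effect": True},
    {"cause": True,  "effect": False},
    {"cause": False, "effect": True},
    {"cause": False, "effect": False},
    {"cause": False, "effect": False},
]

def p_effect_given(cause_value):
    rows = [o for o in observations if o["cause"] == cause_value]
    return sum(o["effect"] for o in rows) / len(rows)

p_with, p_without = p_effect_given(True), p_effect_given(False)
print(round(p_with, 3), round(p_without, 3))       # 0.667 vs 0.333
print("probability raising:", p_with > p_without)  # True
```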

Exploiting the temporal nature of probabilistic events, Dr. Kleinberg's research is a thought-provoking and valuable addition for the scientific community interested in learning causal effects and inference with respect to time. Built upon the works of the likes of Heckerman, Breese, Santos and Young, this book will shape the way probabilistic reasoning researchers think about temporal effects on causality for years to come.

David Hume believed that causes are invariably followed by their effects: "We may define a cause to be an object, followed by another, and where all the objects similar to the first, are followed by objects similar to the second." So, would you like a well-written, margin-annotation-laden text that provides a formal and practical, case-study-based approach to this somewhat abstract concept of causality? Then look no further!


Selected Papers on Interestingness Measures, Knowledge Discovery and Outlier Mining

  • S. Hettich and S. D. Bay. KDD Cup 1999 data. UCI KDD Archive [http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html], 1999.
  • E. Suzuki. Compression-Based Measures for Mining Interesting Rules. In Lecture Notes in Computer Science, volume 5579/2009, pages 741-746. Springer Berlin / Heidelberg, 2009.

Hilary Mason - Machine Learning for Hackers

An interesting beginners' talk for machine learning enthusiasts.

Ever tried to use a regular expression to parse an unstructured street address? This talk is an introduction to a few machine learning algorithms and some tips for integrating them where they make the most sense and will save you the most headaches.

Hilary Mason - Machine Learning for Hackers from BACON: things developers love on Vimeo.


On Bayesian Sensitivity Analysis in Digital Forensics

The idea of using Bayesian Belief Networks in digital forensics to quantify evidence has been around for a while now. Providing qualitative approaches to Bayesian evidential reasoning in digital meta-forensics is, however, relatively new in decision support systems research. For law enforcement, decision support and the application of data mining techniques to "soft" forensic evidence form a large area of Bayesian forensic statistics, which has shown how Bayesian networks can be used to infer the probability of defense and prosecution statements based on forensic evidence. Kevin B. Korb and Ann E. Nicholson's study on Sally Clark is Wrongly Convicted of Murdering Her Children, along with Linguistic Bayesian Networks for reasoning with subjective probabilities in forensic statistics, gives insight into an important development that helps quantify the meaning of forensic expert testimony such as "strong support".

The IEEE paper on Sensitivity Analysis of a Bayesian Network for Reasoning about Digital Forensic Evidence, published at the 3rd International Conference on Human-Centric Computing (HumanCom) in 2010, is of particular interest since it has a comprehensive real-world list of evidence items and hypotheses.

A Bayesian network representing an actual prosecuted case of illegal file sharing over a peer-to-peer network has been subjected to a systematic and rigorous sensitivity analysis. Our results demonstrate that such networks are usefully insensitive both to the occurrence of missing evidential traces and to the choice of conditional evidential probabilities.

One of the co-authors, Dr. Overill, has also covered this ground in A Complexity Based Forensic Analysis of the Trojan Horse Defence.

The evidence nodes are as follows:

  • Modification time of the destination file equals that of the source file
  • Creation time of the destination file is after its own modification time
  • Hash value of the destination file matches that of the source file
  • BitTorrent client software is installed on the seized computer
  • File link for the shared file is created
  • Shared file exists on the hard disk
  • Torrent file creation record is found
  • Torrent file exists on the hard disk
  • Peer connection information is found
  • Tracker server login record is found
  • Torrent file activation time is corroborated by its MAC time and link file
  • Internet history record about the publishing website is found
  • Internet connection is available
  • Cookie of the publishing website is found
  • URL of the publishing website is stored in the web browser
  • Web browser software is available
  • Internet cache record about the publishing of the torrent file is found
  • Internet history record about the tracker server connection is found
  • The seized computer was used as the initial seeder to share the pirated file on a BitTorrent network

while the following hypotheses stand:

  • The pirated file was copied from the seized optical disk to the seized computer
  • A torrent file was created from the copied file
  • The torrent file was sent to newsgroups for publishing
  • The torrent file was activated, which caused the seized computer to connect to the tracker server
  • The connection between the seized computer and the tracker server was maintained

 

The authors conclude, with regard to the sparse evidence, that

The sensitivity analysis reported in this paper demonstrates that the BT BBN used in is insensitive to the occurrence of missing evidence and also to the choice of evidential likelihoods to an unexpected degree.

Our overall finding is gratifying because it implies that the exact choice of values for the inherently subjective evidential likelihoods is not as critical as might have been expected. Values falling within the consensus of experienced expert investigators are sufficiently reliable to be used in the BBN model. Furthermore, our results imply that the inability to recover one or more evidential traces in a digital forensic investigation is not generally critical for the probability of the investigatory hypothesis under consideration.
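To get a feel for the kind of question a sensitivity analysis asks, here is a deliberately tiny, single-evidence Bayes update in Python. It is not the paper's full BitTorrent BBN, and the prior and likelihoods are invented; it only shows how the posterior of a hypothesis moves as one evidential likelihood is perturbed.

```python
# A toy version of the sensitivity question: how much does the posterior
# of hypothesis H move when the evidential likelihood P(E | H) is varied?
# Single-evidence Bayes update with invented numbers, not the paper's BBN.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    num = p_e_given_h * prior_h
    return num / (num + p_e_given_not_h * (1 - prior_h))

prior_h = 0.5
for p_e_given_h in (0.6, 0.7, 0.8, 0.9):
    print(p_e_given_h, round(posterior(prior_h, p_e_given_h, 0.2), 3))
# The posterior moves fairly gently as the likelihood is swept, which is
# the flavour of insensitivity the authors report for their full network.
```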

 

For some reason, this reminded me of a recent read, SuperFreakonomics, where the authors describe a terrorist-detection algorithm built around the following black-box variable.

“What finally made it work was one last metric that dramatically sharpened the algorithm. In the interest of national security, we have been asked to not disclose the particulars; we’ll call it Variable X.

What makes Variable X so special?

For one, it is a behavioral metric, not a demographic one. The dream of anti-terrorist authorities everywhere is to somehow become a fly on the wall in a room full of terrorists. In one small important way, Variable X accomplishes that. Unlike most other metrics in the algorithm, which produce a yes or no answer, Variable X measures the intensity of a particular banking activity. While not unusual in low intensities among the general population, this behavior occurs in high intensities much more frequently among those who have other terrorist markers.

This ultimately gave the algorithm great predictive power. Starting with a database of millions of bank customers, Horsley was able to generate a list of about 30 highly suspicious individuals. According to his rather conservative estimate, at least 5 of those 30 are almost certainly involved in terrorist activities. Five out of 30 isn’t perfect—the algorithm misses many terrorists and still falsely identifies some innocents—but it sure beats 495 out of 500,495.”
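The two hit rates quoted can be put side by side with a couple of lines of arithmetic; the numbers are taken directly from the excerpt above.

```python
# Just the arithmetic behind the two figures quoted above.
print(f"5 out of 30:        {5 / 30:.3f}")         # ~0.167
print(f"495 out of 500,495: {495 / 500_495:.5f}")  # ~0.00099
```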

Bayesian Belief Networks could definitely serve as a better probabilistic graphical model here, offering improved visibility into the prior and posterior probabilities behind such an algorithm.

pgm.HelloWorld() with Wainwright & Jordan

I recently came across Wainwright & Jordan's paper on exponential families, graphical models, and variational inference and found it to be a quite comprehensive and unifying introduction to the topic. Probabilistic graphical models use a graph-based representation as the basis for compactly encoding a complex distribution over a high-dimensional space. If you are familiar with Koller and Friedman's work on probabilistic graphical models, Wainwright and Jordan's paper provides a less mathematically terse and more unifying view of the area.

Graphical Models, Exponential Families, and Variational Inference

Compared to Pearl's work on Causality, this paper provides a contemporary look at Message-passing Algorithms for Approximate Inference, the Connection to Max-Product Message-Passing, and detailed insight into Moment Matrices, Semidefinite Constraints, and Conic Programming Relaxation. Due to its clarity and detailed explanations, the background material on graphs, hypergraphs, exponential families and duality is definitely worth reading even if you don't need a refresher.
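For readers meeting the notation for the first time, two standard displays are worth keeping in mind while reading. They are reproduced here only as a reminder, not as a summary of the paper: the canonical exponential family, and the variational representation of its log-partition function that the monograph is organized around.

```latex
% Canonical exponential family with sufficient statistics \phi(x),
% natural parameters \theta, base measure h, and log-partition A(\theta):
p_\theta(x) = h(x)\,\exp\!\bigl(\langle\theta,\phi(x)\rangle - A(\theta)\bigr),
\qquad
A(\theta) = \log \int h(x)\,\exp\!\bigl(\langle\theta,\phi(x)\rangle\bigr)\,dx.

% Variational representation of the log-partition function, where
% \mathcal{M} is the set of realizable mean parameters and A^* is the
% conjugate dual of A (the negative entropy on the interior of \mathcal{M}):
A(\theta) = \sup_{\mu\,\in\,\mathcal{M}} \bigl\{\langle\theta,\mu\rangle - A^{*}(\mu)\bigr\}.
```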

In lieu of Pearl's polytree approach, Wainwright & Jordan's work discusses Graphical Models as Exponential Families before delving into Computational Challenges with High-Dimensional Models. Later chapters deal with Sum-Product, Bethe–Kikuchi, and Expectation-Propagation, Mean Field Methods, Variational Methods in Parameter Estimation, Convex Relaxations and Upper Bounds, Integer Programming, Max-product, and Linear Programming Relaxations, concluding with Moment Matrices, Semidefinite Constraints, and Conic Programming Relaxation. For a computer scientist, it is always interesting to observe the statistical perspective on machine learning. This contemporary treatment of Graphical Models, Exponential Families, and Variational Inference was published in Foundations and Trends in Machine Learning and is definitely built upon the researchers' earlier work on Variational inference for Dirichlet process mixtures and Variational inference in graphical models: The view from the marginal polytope.
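Of the methods listed, naive mean field is the quickest to write down. Below is a minimal sketch for a tiny Ising-style model with spins in {-1, +1}, iterating the standard coordinate update; the parameters are invented and this is the textbook update, not code taken from the monograph.

```python
# A minimal naive mean-field sketch for a tiny Ising-style model with
# spins in {-1, +1}: iterate mu_i <- tanh(theta_i + sum_j J_ij * mu_j).
# Parameters are invented; this is the textbook update only.
import numpy as np

theta = np.array([0.5, -0.2, 0.1])          # unary (field) parameters
J = np.array([[0.0, 0.3, 0.0],              # symmetric pairwise couplings
              [0.3, 0.0, 0.4],
              [0.0, 0.4, 0.0]])

mu = np.zeros(3)                             # mean parameters E[x_i]
for _ in range(100):                         # coordinate-ascent sweeps
    for i in range(3):
        mu[i] = np.tanh(theta[i] + J[i] @ mu)

print(np.round(mu, 3))                       # approximate marginal means
```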

As an appetizer, I would also recommend Bishop's chapter on Graphical Models.
