## Teaching Functional Programming to Professional .NET Developers


An informative paper by Tomas Petricek of the University of Cambridge.

Abstract. Functional programming is often taught at universities to first-year or second-year students, and most of the teaching materials have been written for this audience. With the recent rise of functional programming in industry, it becomes important to teach functional concepts to professional developers with deep knowledge of other paradigms, most importantly object-oriented. We present our experience with teaching functional programming and F# to experienced .NET developers through the book Real-World Functional Programming and commercially offered F# trainings. The most important novelty in our approach is the use of C# for relating functional F# with object-oriented C# and for introducing some of the functional concepts. By presenting principles such as immutability, higher-order functions and functional types from a different perspective, we are able to build on the existing knowledge of professional developers. This contrasts with a common approach that asks students to forget everything they know about programming and think completely differently. We believe that our observations are relevant for trainings designed for practitioners, but perhaps also for students who explore functional programming relatively late in the curriculum.

Honorable mention to A Look at F# from C#’s corner

## Poodle & Sandworm


With National Cyber Security Awareness Month recently behind us, a shout-out to CVE-2014-4114 (Sandworm), patched by MS14-060: a vulnerability in the OLE package manager that can be exploited to remotely execute arbitrary code on Microsoft Windows versions from Vista SP2 to Windows 8.1, and on Server 2008 and 2012. Yeah, 2012 too.

And here is to POODLE.

POODLE: This dog bites – An infographic by the team at Pluralsight

## GAC Changes in Windows Server 2012

Prior to Windows Server 2012, gacutil.exe was typically used to install DLL files into the Windows Global Assembly Cache (GAC). With Windows Server 2012, unfortunately, it's not quite so easy. Being able to simply open the GAC in Explorer and drag/drop is gone (so yeah, no shell extension!). GacUtil.exe is also not present on the server by default, because it ships with the SDK rather than the runtime. To use gacutil as in earlier versions of Windows, we would need to install the .NET SDK on the server, which is not really a good idea (defense in depth: keep only the runtime on servers). And of course, simply copying gacutil.exe over doesn't work, because of its dependencies.

As we are all too familiar, in .NET versions prior to 4.0 the GAC lived in c:\windows\assembly and used a custom shell extension to flatten the directory structure into a list of assemblies. As mentioned earlier, that shell extension is no longer used for .NET 4.0 and up. Since we have .NET 4.5 on the server machines, its GAC is stored in c:\windows\microsoft.net\assembly, and you simply see the actual directory structure. Locating an assembly isn't that difficult: start in the GAC_MSIL directory and find the folder with the same display name as your assembly. It will have a subdirectory with an unwieldy name based on the version and public key token; that subdirectory contains the DLL.

Therefore, PowerShell is the recommended approach for the GAC install. The following shows how to install DLLs into the GAC on Windows Server 2012. For Enterprise Library 6 (EL6), we ended up writing this PowerShell script.

Set-Location "C:\tmp"
$publish = New-Object System.EnterpriseServices.Internal.Publish
$publish.GacInstall("c:\tmp\Microsoft.Practices.EnterpriseLibrary.Common.dll")
$publish.GacInstall("c:\tmp\Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.dll")
$publish.GacInstall("c:\tmp\Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.dll")
$publish.GacInstall("c:\tmp\Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF.dll")
$publish.GacInstall("c:\tmp\Microsoft.Practices.EnterpriseLibrary.Logging.dll")



Happy Coding!

## WCF Handling Nested Attributes in AttributeGroup

tl;dr: SVCUtil does not generate code for nested attributes in an attributeGroup; here is the code (github repo) and an explanation of a workaround.

Beyond controlled HelloWorld() samples, interoperability standards are not black and white, but rather a process with shades of gray. If you've worked on consuming third-party 'enterprise' APIs, you may have encountered problems with flattening of WSDLs, or noticed when generating a service proxy with svcutil.exe that not all attribute groups get generated. For instance, if an attributeGroup wraps another attributeGroup that contains attributes, those inner attributes will NOT be generated.

I tried a few things, like using the DataContractSerializer, but it appears that the attributeGroup is ignored by design. The only workaround appears to be removing the extra attributeGroup wrapping. DataContractSerializer does not recognize attributeGroup at all (see the Data Contract Schema Reference), and as you may have already noticed, XSD (essentially the XmlSerializer) does not recognize nested attributeGroups.

One reference workaround, described here, essentially asks you to replace the attributeGroup references with the actual attributes.

For example:

<xs:attributeGroup name="myAttributeGroup">
<xs:attribute name="someattribute1" type="xs:integer"/>
<xs:attribute name="someattribute2" type="xs:string"/>
</xs:attributeGroup>

<xs:complexType name="myElementType">
<xs:attributeGroup ref="myAttributeGroup"/>
</xs:complexType>

should be transformed into:

<xs:complexType name="myElementType">
<xs:attribute name="someattribute1" type="xs:integer"/>
<xs:attribute name="someattribute2" type="xs:string"/>
</xs:complexType>

To do this in a repeatable manner, the following code provides a good starting point for handling the attributeGroup issue. Here is a before-and-after screenshot of the WSDLs.

I created a small C# app to transform the data and then ran svcutil to generate the proxy, essentially replacing all instances of <attributeGroup ref="xxx"/> with the attribute definitions from <attributeGroup name="xxx">, just as described in the link I provided earlier.

XNamespace xs = "http://www.w3.org/2001/XMLSchema";
XDocument wsdl = XDocument.Load(inputFile);

// Collect every named attributeGroup definition up front.
List<XElement> attributeGroupDefs =
    wsdl.Root.Descendants(xs + "attributeGroup")
        .Where(w => w.Attribute("name") != null)
        .ToList();

// For each attributeGroup reference, inline copies of the attributes
// from the matching named definition into the referencing element.
// ToList() so we can safely modify the tree while iterating.
foreach (XElement r in
    wsdl.Root.Descendants(xs + "attributeGroup")
        .Where(w => w.Attribute("ref") != null)
        .ToList())
{
    string refValue = r.Attribute("ref").Value;

    foreach (XElement d in attributeGroupDefs)
    {
        string defValue = d.Attribute("name").Value;
        if (refValue == defValue)
        {
            foreach (XElement e in d.Elements(xs + "attribute"))
            {
                r.Parent.Add(new XElement(e));
            }
            break;
        }
    }
}

// Finally, remove the now-redundant attributeGroup references.
wsdl.Root.Descendants(xs + "attributeGroup")
    .Where(w => w.Attribute("ref") != null)
    .Remove();
wsdl.Save(outFile);



This may require some more tweaking, but it appears to have corrected all or most of the attributeGroup issues (I only spot-checked the output).
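For readers who want to experiment with the transformation outside of .NET, here is a minimal Python sketch of the same idea (a hypothetical helper, standard library only): it expands one level of attributeGroup references into the attributes of the matching named definition and drops the group elements.

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def flatten_attribute_groups(xsd_text):
    """Expand <xs:attributeGroup ref="..."> into the attributes of the
    matching named definition, then drop all attributeGroup elements.
    Handles one level of nesting only."""
    root = ET.fromstring(xsd_text)
    # Map each named definition to its direct xs:attribute children.
    defs = {g.get("name"): g.findall(XS + "attribute")
            for g in root.iter(XS + "attributeGroup")
            if g.get("name") is not None}
    # Snapshot (parent, child) pairs before mutating the tree.
    pairs = [(p, c) for p in root.iter()
             for c in list(p) if c.tag == XS + "attributeGroup"]
    for parent, child in pairs:
        parent.remove(child)
        ref = child.get("ref")
        if ref is not None:
            # Drop an optional namespace prefix on the ref value.
            parent.extend(defs.get(ref.split(":")[-1], []))
    return root
```

Feeding it the example schema from above leaves a complexType containing the two attributes directly, with no attributeGroup elements remaining.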

Happy Coding!

## P≠NP - A Definitive Proof by Contradiction

Following the great scholarly acceptance and outstanding academic success of "The Clairvoyant Load Balancing Algorithm for Highly Available Service Oriented Architectures", this year I present "P Not Equal to NP - A Definitive Proof by Contradiction".

## LyX/LaTeX formatting for the C# code

If you are googling for a good way to insert C# code in LyX, this is probably where you'd end up. MaPePer has provided a very good solution; I have modified it slightly (hiding tabs and removing comments), and the following illustrates how to use it in LyX.

The first thing you'll need is a LyX document (LyxC#CodeListing.lyx). An empty one works well.

Add the following to the preamble (Document -> Settings -> LaTeX Preamble):

\usepackage{color}
\usepackage{listings}

\lstloadlanguages{% check the documentation for further languages ...
C,
C++,
csh,
Java
}

\definecolor{red}{rgb}{0.6,0,0} % for identifiers
\definecolor{blue}{rgb}{0,0,0.6}
\definecolor{green}{rgb}{0,0.8,0}
\definecolor{cyan}{rgb}{0.0,0.6,0.6}

\lstset{
language=csh,
basicstyle=\footnotesize\ttfamily,
numbers=left,
numberstyle=\tiny,
numbersep=5pt,
tabsize=2,
extendedchars=true,
breaklines=true,
frame=b,
stringstyle=\color{blue}\ttfamily,
showspaces=false,
showtabs=false,
xleftmargin=17pt,
framexleftmargin=17pt,
framexrightmargin=5pt,
framexbottommargin=4pt,
morecomment=[l]{//}, %use comment-line-style!
showstringspaces=false,
morekeywords={ abstract, event, new, struct,
as, explicit, null, switch,
base, extern, object, this,
bool, false, operator, throw,
break, finally, out, true,
byte, fixed, override, try,
case, float, params, typeof,
catch, for, private, uint,
char, foreach, protected, ulong,
checked, goto, public, unchecked,
const, implicit, ref, ushort,
continue, in, return, using,
decimal, int, sbyte, virtual,
default, interface, sealed, volatile,
delegate, internal, short, void,
do, is, sizeof, while,
double, lock, stackalloc,
else, long, static,
enum, namespace, string},
keywordstyle=\color{cyan},
identifierstyle=\color{red},
}
\usepackage{caption}
\DeclareCaptionFont{white}{\color{white}}
\DeclareCaptionFormat{listing}{\colorbox{blue}{\parbox{\textwidth}{\hspace{15pt}#1#2#3}}}
\captionsetup[lstlisting]{format=listing,labelfont=white,textfont=white, singlelinecheck=false, margin=0pt, font={bf,footnotesize}}




Now add a program listing block. Hopefully you have the listings package installed; otherwise you can always install it through the MiKTeX package manager.

Now add the code to the listing block and press Ctrl-R to view the typeset output.
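If you prefer working in raw LaTeX (for instance inside an ERT box), a listing block using this preamble might look like the following hypothetical example:

```latex
\begin{lstlisting}[caption={A small C\# example}]
// Highlighted via the csh-based C# setup from the preamble.
public static int Add(int a, int b)
{
    return a + b;
}
\end{lstlisting}
```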

Happy LyXing!

## Machine Learning - On the Art and Science of Algorithms with Peter Flach

Over a decade ago, Peter Flach of Bristol University wrote a paper titled "On the state of the art in machine learning: A personal review", in which he reviewed several then-recent books related to developments in machine learning. These included Pat Langley's Elements of Machine Learning (Morgan Kaufmann), Tom Mitchell's Machine Learning (McGraw-Hill), and Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations by Ian Witten and Eibe Frank (Morgan Kaufmann), among many others. Dr. Flach singled out Michael Berry and Gordon Linoff's Data Mining Techniques for Marketing, Sales, and Customer Support (John Wiley) for its excellent writing style, citing the paragraph below and commending: "I wish that all computer science textbooks were written like this."

“People often find it hard to understand why the training set and test set are “tainted” once they have been used to build a model. An analogy may help: Imagine yourself back in the 5th grade. The class is taking a spelling test. Suppose that, at the end of the test period, the teacher asks you to estimate your own grade on the quiz by marking the words you got wrong. You will give yourself a very good grade, but your spelling will not improve. If, at the beginning of the period, you thought there should be an ‘e’ at the end of “tomato”, nothing will have happened to change your mind when you grade your paper. No new data has entered the system. You need a test set!

Now, imagine that at the end of the test the teacher allows you to look at the papers of several neighbors before grading your own. If they all agree that “tomato” has no final ‘e’, you may decide to mark your own answer wrong. If the teacher gives the same quiz tomorrow, you will do better. But how much better? If you use the papers of the very same neighbors to evaluate your performance tomorrow, you may still be fooling yourself. If they all agree that “potatoes” has no more need of an ‘e’ than “tomato”, and you have changed your own guess to agree with theirs, then you will overestimate your actual grade on the second quiz as well. That is why the evaluation set should be different from the test set.” [3, pp. 76–77]
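The point of the analogy can be demonstrated with a small, hypothetical Python sketch: a model that simply memorizes its training data scores perfectly on that data, while a held-out set reveals its true (lower) accuracy.

```python
import random

random.seed(0)

def make_data(n):
    # Toy data: label is 1 when the feature sum is positive, with label noise.
    data = []
    for _ in range(n):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        y = 1 if x[0] + x[1] + random.uniform(-0.5, 0.5) > 0 else 0
        data.append((x, y))
    return data

train, test = make_data(200), make_data(200)

# A "memorizing" model: store every training example verbatim,
# and fall back to a simple rule for points it has never seen.
memory = dict(train)

def predict(x):
    return memory.get(x, 1 if x[0] + x[1] > 0 else 0)

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc = sum(predict(x) == y for x, y in test) / len(test)
# Training accuracy is a perfect 1.0 (the model graded its own paper);
# only the held-out test set exposes the real error rate.
```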

That is why, when I recently came across "Machine Learning: The Art and Science of Algorithms that Make Sense of Data", I decided to check it out, and I wasn't disappointed. Dr. Flach is Professor of Artificial Intelligence at the University of Bristol, and in this "future classic" he leaves no stone unturned when it comes to clarity and explainability. The book starts with a machine learning sampler, introduces the ingredients of machine learning, and progresses quickly to binary classification and beyond. Written as a textbook, riddled with examples, footnotes and figures, the text covers concept learning, tree models, rule models, linear models, distance-based models and probabilistic models, moving on to features and ensembles and concluding with machine learning experiments. I really enjoyed the "Important points to remember" sections of the book as a quick refresher on machine-learning commandments.

The concept learning section seems to have been influenced by the author's own research interests and is not discussed in as much detail in contemporary machine learning texts. I also found the frequent summarization of concepts quite helpful. Contrary to its subtitle, and compared to its counterparts, the book is light on algorithms and code, possibly on purpose. While it explains the concepts with examples, the number of formal algorithms is kept to a minimum. This may aid clarity and help avoid recipe-book syndrome, but it makes the book potentially less accessible to practitioners. Great on the basics, the text also falls short on intermediate to advanced topics such as LDA, kernel methods, PCA, RKHS, and convex optimization. For instance, chapter 10's "Matrix transformations and decompositions" could have been made an appendix, making room for meaningful topics like LSA and use cases of sparse matrices (p. 327). This is definitely not the book's fault, but rather this reader expecting too much from an introductory text, just because the author explains everything so well!

As a textbook on the art and science of algorithms, Peter Flach definitely delivers on the promise of clarity, with well-chosen illustrations and an example-based approach. Highly recommended reading for anyone who would like to understand the principles behind machine learning techniques.

Materials can be downloaded from here; they generously include excerpts with background material and literature references, plus a full set of 540 lecture slides in PDF, including all figures in the book, along with the LaTeX Beamer source of the slides.

## Hacktivity - Software Threat Modeling by Shakeel Tufail

Threat modeling and diversion tactics; a good high-level overview of software security.

There are only a handful of threat modeling approaches in the industry, and they are difficult to implement due to their subjective guidelines. Our training session will focus on best practices and a hands-on approach that will give attendees a better understanding of how to conduct threat modeling in their organizations. Most threat models focus on attackers; we will look at the threat model using trust zones, identifying assets, indirect threats, and ambiguity analysis. We will also cover secure design concepts and best practices for securing software architecture.

Learning Objectives: At the end of this workshop, participants will be able to:

• Understand the basics of threat modeling software applications
• Understand the meaning of threats, attack vectors, and trust zones