Sunday, February 12, 2017

Heroes turned into Frankenstein (part II)

Part II.  This is the second post in a series documenting my exploration of, and attempt at learning, Angular 2.  Here is the first post.

Styles.  Once I had my app logging in and could route to specific modules, I had a set of learning tasks I wanted to accomplish.  The first was to apply a set of styles.  The go-to for styling for the lazy is of course Bootstrap.  Someone has already established an npm module called ng2-bootstrap that wraps Bootstrap 4 and offers up Angular 2 components.  I decided to use this set of components to build the Tour of Heroes CRUD functionality found in the hero-crud module in my app.  For some reason I forgot to do the "create" portion of the module.  Maybe I'll add that later.  Somewhere along the way I decided to look at Angular Material Design and created a small sample app to play with it.  I thought about converting the app to all Material Design.  I spun up a quick angular-cli app and created a form on a card with a nav bar at top.  I liked the look but it still feels kind of buggy: fields pre-filled by Chrome are not recognized by the controls and the placeholder text writes over the pre-filled text.  I decided to stick with Bootstrap for the learning app.

Components.  When looking for components, I am usually thinking about the more complex components that offer a lot of data/functionality, such as data tables or treeviews.  I started looking at data table wrappers in Angular.  The two I ended up playing with are ng2-table, from the same people who did ng2-bootstrap, and angular2-datatable.  At this point my backend was only serving up the few heroes I had started playing with from the original Angular Tour of Heroes.  This data store only has 2 columns and I only populated about 10 rows from the original example.  Knowing I wanted a better set of sample data, I fell back to my old example schema from prior posts that has a Company, Department, Employee set of tables.  I usually populate with only a few rows, but I really wanted to test pagination on these tables.  To create the data, I wrote a data generation app in Groovy to spit out a bunch of inserts of randomly generated employees for my employee table.  I next had to add the REST path through the backend app to the Employee data objects and ended up writing a full set of REST operations for employees.  This became the basis for the remainder of the table-based work I did.  I wired the calls to the employee REST service into my two table examples.  I ended up with a good representation of how these two data table approaches work within the Angular Bootstrap ecosystem.
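The Groovy generator itself was just a throwaway script, but the idea is simple enough to sketch.  Here is a minimal TypeScript version of the same approach; the employee table, column names, and sample values are illustrative, not the actual schema:

```typescript
// Sketch of a random-data generator for bulk-loading an employee table.
// Table and column names are illustrative, not the real schema.
const FIRST = ['Ada', 'Grace', 'Alan', 'Edsger', 'Barbara'];
const LAST = ['Lovelace', 'Hopper', 'Turing', 'Dijkstra', 'Liskov'];
const TITLES = ['Engineer', 'Analyst', 'Manager'];

function pick<T>(xs: T[]): T {
  return xs[Math.floor(Math.random() * xs.length)];
}

// Produce one INSERT statement per requested row, each referencing a
// random department id so pagination and joins have data to chew on.
function generateInserts(rows: number, deptIds: number[]): string[] {
  const out: string[] = [];
  for (let i = 0; i < rows; i++) {
    const name = `${pick(FIRST)} ${pick(LAST)}`;
    out.push(
      `INSERT INTO employee (name, title, department_id) ` +
      `VALUES ('${name}', '${pick(TITLES)}', ${pick(deptIds)});`
    );
  }
  return out;
}
```

A few hundred rows generated this way is plenty to exercise a table component's pagination.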

PrimeNG and Trees.  I took a bit of a side trip into looking at the PrimeNG set of components newly released for Angular 2.  I was able to quickly add the PrimeNG table example.  I'm not sure at this point if I am becoming proficient at adding components, but this one seemed to come together rapidly for me.  I was pretty happy with the functionality of the PrimeNG datatable and how easy it was to work with.  I added row selection and coded events that responded to the selection.  I also ended up playing with the treeview component, which took me down another rabbit hole: how to pull a tree-style structure from my REST back end.  I ended up writing a REST service for Company rather than using the Employee service and added a GET operation ("/treenodes") to return JSON tailored for the PrimeNG Tree component.  I look at the popular jsTree jQuery component as the gold standard for trees, and it feels like the PrimeNG component, while functional, has room for improvement when compared to jsTree.  I saw some attempts at wrapping jsTree for Angular 2, but they looked like they were not ready for prime time.
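The PrimeNG Tree consumes an array of TreeNode objects (label, data, children, and a few more optional fields).  Here is a sketch of the kind of shaping the "/treenodes" operation does, nesting departments and employees under a company; the row shapes here are illustrative stand-ins, not my actual entities:

```typescript
// Minimal shape of what the PrimeNG Tree component consumes.
interface TreeNode {
  label: string;
  data?: unknown;
  children?: TreeNode[];
}

// Illustrative flat rows, as they might come back from the database.
interface DeptRow { id: number; name: string; companyId: number; }
interface EmpRow { id: number; name: string; deptId: number; }

// Nest departments under the company and employees under their department.
function toTree(company: string, depts: DeptRow[], emps: EmpRow[]): TreeNode {
  return {
    label: company,
    children: depts.map(d => ({
      label: d.name,
      data: d.id,
      children: emps
        .filter(e => e.deptId === d.id)
        .map(e => ({ label: e.name, data: e.id })),
    })),
  };
}
```

Once the JSON is in this shape, binding it to the tree is just a component property.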

One more.  I feel like I have one more post to go to cover my exploration into Angular 2 (or just Angular).  I did some form work, took a side trip into writing events (two-way binding), and covered a lot of ground with WebSockets and STOMP.  I'll save all of this for the last post.

Saturday, February 11, 2017

Heroes turned into Frankenstein (part I)

Starting at Tour of Heroes.  In my last post I had just redone my Tour of Heroes app for the newly released Angular (Angular 2, 4? whatever).  I had added a Spring Boot back end for the heroes service and all went well.  But what I really wanted was to do a deep-ish dive into angular-cli and the Angular router.  This led down a rabbit hole of exploration into the Angular ecosystem.  I have arrived at the other end with some interesting (to me, anyway) technology findings and an overall appreciation for where Angular currently is in terms of framework maturity.  This blog entry will document some of that discovery.  The source for client and server is on my GitHub account.

First stop: Security.  For some reason I take a somewhat masochistic interest in web security.  I don't really enjoy working in that space, but I never feel like I can start even a play project without solving the problem of authentication and authorization.  As a learning exercise I decided to implement a JWT (JSON Web Token) approach with the goal of keeping my backend as stateless as possible.

JWT Server.  As there are two sides to this coin, server and client, I started with the server.  I'm using Spring Boot and was hoping the Spring Security project provided a module out of the box for JWT.  No such luck.  I ended up utilizing a series of web posts to learn about implementing JWT handling with Spring Security.  Pretty much my entire server-side JWT handling ended up being lifted from this repo on GitHub.  It uses Spring Security and plugs in JWT handling in the right spots.  I have a simple User/Authority/UserAuthority set of tables in PostgreSQL that I use for authn/authz.  I then used the JWT handling code from the repo to manage token creation, handling, and validation within a filter.
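To make the token mechanics concrete, here is a minimal from-scratch sketch of what the filter is doing under the hood: creating an HMAC-SHA256-signed JWT and validating the signature on the way back in.  This is an illustration only; a real app should use a vetted library and also check the exp claim, which this sketch skips:

```typescript
import { createHmac } from 'node:crypto';

// JWTs are three base64url segments: header.payload.signature.
function b64url(data: string): string {
  return Buffer.from(data).toString('base64url');
}

// Sign a payload with HMAC-SHA256 (the "HS256" algorithm).
function sign(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: 'HS256', typ: 'JWT' }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac('sha256', secret)
    .update(`${header}.${body}`)
    .digest('base64url');
  return `${header}.${body}.${sig}`;
}

// Recompute the signature over header.payload; reject on any mismatch.
// Returns the claims on success, null on a tampered token or wrong key.
function verify(token: string, secret: string): object | null {
  const [header, body, sig] = token.split('.');
  const expected = createHmac('sha256', secret)
    .update(`${header}.${body}`)
    .digest('base64url');
  if (sig !== expected) return null;
  return JSON.parse(Buffer.from(body, 'base64url').toString());
}
```

This is why the server can stay stateless: the signature, not a session store, is what proves the claims haven't been altered.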

JWT Client.  I next turned to Angular.  I had already implemented routing within the app and added a route/component to log in.  I used the angular2-jwt library to help implement the passing of the JWT on all REST calls to the server.  The general idea with JWT is that once the user is logged in, all HTTP calls have the token attached (cookie or header) from the client side, and the server (in my case a filter on Spring Boot) will pull the token and validate/authenticate the user.  Angular2-jwt accomplishes this on the client by providing an AuthHttp object which takes the place of the normal Http object through which you make your REST calls.  AuthHttp just wraps Http and adds the token to the call.  I currently store the token in local storage.  This has the weird effect of never logging me out.  Somewhere the expiration time is being ignored, but I'll leave that and the large topic of JWT for another blog post.
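The wrapping idea is easy to illustrate outside of Angular.  This is not angular2-jwt's actual code, just a sketch of the pattern it implements: decorate the HTTP function so every outgoing call carries the stored token in an Authorization header:

```typescript
// Illustration of the pattern behind angular2-jwt's AuthHttp (not the
// library's actual code): wrap the plain HTTP call so every request
// goes out with the stored token attached.
type HttpFn = (url: string, headers: Record<string, string>) => unknown;

// `getToken` stands in for reading the JWT out of local storage.
function withJwt(http: HttpFn, getToken: () => string | null): HttpFn {
  return (url, headers) => {
    const token = getToken();
    const merged = token
      ? { ...headers, Authorization: `Bearer ${token}` }
      : headers;
    return http(url, merged);
  };
}
```

The calling code never thinks about the token again; it just uses the wrapped function the way it used the plain one.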

Angular-cli, routes, and components.  When I first started working on this play application, the router code generation had just been pulled from angular-cli with the promise that it would show back up.  Apparently routing had changed from router version 2 to 3 and it was very different and blah.  What it meant for me was that as I generated components for my application using "ng g c my-component", I would have to manually wire in the routes.  Not really a big deal.  But as I started looking into the routing generation, I discovered that there was a way to lazy load an entire module of components via a routing subset.  This is kind of interesting.  Knowing that your app is gonna be pulling LOTS of JavaScript, being able to segment off a module that will be downloaded on demand is pretty awesome.  Not only that, but through a massive googling session I ran across someone mentioning that the generation of the code to pull this off was still in angular-cli but was not documented.  It turns out that you can invoke "ng g m my-module --routing" and angular-cli will generate the module, component, and set up the routes array.  Just a little wiring from the main route would set up the newly generated module to be lazy loaded.  The simplest example of this in my source is the app routing module and the comp2 routing module.  As I am new to TypeScript and not what one would call a proficient JavaScript developer, I stumbled for over a week over the export syntax needed to achieve this load-on-demand module approach, which culminated in my public a-ha moment on Stack Overflow.  There was immense satisfaction in seeing this load on demand actually work in Chrome, seeing the webpacked module only pull down when needed.  Good stuff.
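For reference, the wiring from the main route looks roughly like this in the Angular 2-era router (a hedged sketch; the module and class names follow the comp2 example above but may differ from my actual source):

```typescript
// app-routing.module.ts (sketch). The loadChildren string syntax
// ("path/to/module#ExportedClassName") is what lets webpack split the
// module into its own chunk and download it on first navigation.
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  { path: 'comp2', loadChildren: 'app/comp2/comp2.module#Comp2Module' },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```

The lazily loaded Comp2Module then declares its own routes with RouterModule.forChild, and nothing about it is downloaded until the user first navigates to /comp2.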

More to come.  Just now realizing I have too much to post in just a single blog entry so I'll leave off here.  In my next post I hope to include my work with different grid/table components,  treeview components, angular forms,  bootstrap and material styling, prime-ng components and a whole lot around websockets and STOMP protocol. Maybe 2 more posts!

Part II here.


Friday, December 2, 2016

Angular 2 Tour of Heroes with Spring Boot Rest Service

Blogworthy.  So I finally have a topic that actually matches the description of my blog: a web app with a back end using Groovy and Java.  I have been working through a few tutorials to get myself up to speed on Angular 2.  I first reviewed Scott Davis's getting-ready-for-Angular video tutorial on Safari Books Online.  Scott does a great job presenting and I really like his energy in the video, but his focus was on staying strictly in the ECMAScript world, and I was interested in learning the Angular 2 TypeScript approach, so I was left wanting more.

Angular and Spring.  I had done the Tour of Heroes tutorial just prior to the GA release of Angular 2, but didn't complete it, and I wanted to walk through it again.  This time I completed it and, for extra bonus points, built a Spring Boot back end to support it.  The results are on GitHub: tour of heroes and hero-service.  As always, Promises, and now Observables, were a bit of a mental challenge for me.  I also tripped up on the need to set up CORS so my Angular app on port 3000 could talk to the REST service on port 8080.

Saturday, September 3, 2016

Windows 10 and Visual Studio 2015

Background.  In my day job I have moved to a point in my career where I don't do full-time development, but help guide the organization and individuals in making technology choices.  One of our developers has, over the past year or so, developed a new .NET-based version of a product that was originally developed in VB 6.0 and Java.  VB was originally used for the UI while Java was used for our service-based code.  As we were/are primarily a Java shop, and our company perceived that introducing .NET at that point in time was unpalatable to the IT shops that supported our users, these choices made some sense.  Fast forward about 10 years and we see the .NET runtime as pervasive on the desktop, while Java in that space is somewhat frowned upon by the IT organizations we deal with.  This is not a reflection on Java itself, which I continue to think has a strong leadership position as an enterprise language/technology platform.  It is more a reflection of the poor perception (warranted or not) that IT organizations hold on supporting the JVM on client desktops.

.NET Curious. As the developer of this .NET application was walking me through his work, I kept noticing how far Visual Studio has come in terms of providing the developer with the right tools at the right time for solid enterprise level development.  I decided to look into Visual Studio and C# in particular to get a feel for the tool set and environment.  I use Windows 7 on my work machine, but have a Macbook Pro for my home machine.  As the MacBook Pro is the machine that I do most of my research on, I decided to build a VM to host a new Windows 10 / Visual Studio development environment.

VM Building.  Knowing that I would probably want to keep this VM around and might live in it a while, I wanted it to be a good size, disk-space-wise.  I also didn't want to give up precious SSD space on my Mac.  So I bought a 256GB USB 3.0 thumb drive from Costco dedicated to this VM.  I first put Windows 10 Pro on it, then did the Visual Studio Enterprise install.  I picked a pretty full installation as I want to try out all of the data tools, web tools, and maybe other languages (Python, F#).  The installation took a few hours and I was a little concerned that my multiple reboots always seemed to come up with the hard drive pegged at 100%.  I'm not sure exactly why this was, but my guess is that Windows Cortana and search services started out by indexing everything on the drive and just beat up the hard drive stats.  Once I started working regularly in the VM, the hard drive thrash abated and I don't see nearly the number of lengthy pauses that I did upon the initial installation.

JavaFX and WPF.  Part of the reason for my curiosity in researching .NET tools is that I really wanted to see how rapidly I could develop web-enabled applications for the desktop in Visual Studio.  My interest in this area started with me looking at JavaFX for the same thing in the Java arena.  I really liked the notion of the declarative UI with backing MVC code pattern that JavaFX uses.  When I mentioned to colleagues that I was playing with JavaFX, I got a lot of raised eyebrows.  The .NET app that our developer had written used WinForms, but I let the developer know at the start of that project that it was his choice whether we used WinForms or WPF.  His familiarity with WinForms was his reason for going that route, but knowing that WPF had a similar approach to JavaFX got me thinking about looking into the .NET tools for developing desktop WPF apps.  For my Java development I have used IntelliJ IDEA since its very early days.  The current version incorporates the JavaFX Scene Builder for screen building with control palettes and properties.  It works OK.  I had to restart Scene Builder quite often, and it seemed to lose its notion of where you were when it lost and gained focus, but overall it was a pretty good experience.  My goal with Visual Studio was to compare the experience within the Microsoft tool suite and WPF to see how it stands up to IDEA/Scene Builder.

Visual Studio First Impressions.  I am starting out my exploration of Visual Studio by working through a trusted standby reference: Andrew Troelsen and Philip Japikse's C# 6.0 and the .NET 4.6 Framework, 7th Edition.  I am relearning C#, which I never really used in production apps myself.  I find myself skimming large sections that really draw out topics a newcomer to the language would want to dive into.  I found a key bindings reference to become familiar with Visual Studio editing functions, but I really miss some of the IntelliJ IDEA editing functions that don't seem to have a counterpart in Visual Studio.  I really enjoy being in the Microsoft everything-at-your-fingertips world.  The MSDN / library docs are great.  One feature that is new to me since the last time I used Visual Studio is the behavior of spinning up a local Git repo for every new project/solution I create.  I like to push my play/learning projects up to my GitHub account, and it looks like this will be easy with Visual Studio.  This is a feature that IntelliJ IDEA has long had, but I didn't expect Visual Studio to embrace Git/GitHub at the same level.  I was pleasantly surprised.

Long Lead Time.  It will probably take some time before I am comfortable writing C# code in Visual Studio, but so far I am enjoying completing the samples in the Troelsen book.  Debugging is fun and easy.  I notice that a performance monitor pops up as I'm running my app, and this looks interesting.  I really want to get my arms around the SQL Server data tools as well.  Maybe even introduce PostgreSQL into the mix.  I have a small Department-Employee data table set that I like to use as play data and want to introduce into a learning project.  Lots of learning to do.  I'll continue to blog on this as I go, or until my interest moves in another direction!

Saturday, May 14, 2016

Added hiphello to Github, SMTP testing

Hiphello.  Adding a quick note to say that I uploaded my Hello World version of JHipster, hiphello, to GitHub.  Points of interest: the /jdl directory off the root contains my jhipster-uml JDL model used to kick-start the app's entity model.  I also have an example of loading 1 row into each of my entities in the /src/main/resources/config/liquibase directory.

Docker.  By the way, a nice way to test this without having to stand up postgres (my db for the app) is to use docker.  If you have docker installed you simply type in the following commands:

./gradlew bootRepackage -Pprod buildDocker

docker-compose -f src/main/docker/app.yml up

Mail.  Another handy tool to test the mail capabilities is FakeSMTP.  I ignored mail in my learning app until I needed to change a password.  I realized a quick way to get this done was to have the change-password email sent out to drive the user (me) back in with the change-password token.  FakeSMTP has a nice GUI to show incoming mail and display it, or it can just watch an incoming mail directory that you specify.  Displaying the email from the GUI invokes your OS handler for files with a ".eml" extension.  In my case it kicked off MS Office's Outlook application, which I haven't used for years and was hooked to my Gmail account.  It then tried to pull ALL of my Gmail mail.  Not good; be forewarned, know what .eml handling will do.  On another note, there is a Docker file for FakeSMTP which by default fires up the server mode.  I wouldn't mind seeing FakeSMTP included in the JHipster generator, and if I have an opportunity to figure out how to incorporate it at a level that makes sense for the JHipster community, I may do the fork/pull request thing.

FakeSMTP Alternatives.  @Fotozik on twitter alerted me to MailDev, a JavaScript/npm package that does essentially the same thing as FakeSMTP but uses a web page to show outgoing emails.  It works nicely with the HTML email that JHipster kicks out.  The MailDev code is on GitHub.  Installation:
npm install -g maildev
For unit testing, the SubEtha SMTP project has a subproject called Wiser.  Wiser allows you to start an SMTP server and test incoming email in a unit test.

Thursday, May 12, 2016

Multi-tenancy

Multi-tenancy and Security.  In reviewing the needs of the learning app I am developing with JHipster, I stumbled on a somewhat large-ish disconnect in terms of security.  JHipster integrates Spring Security, which offers role-based authorization (authz) and a variety of social sign-on options.  It's pretty complete right out of the box (that JHipster generates).  This is all good, but my app is being built as a multi-tenant app.  When a user logs in, I not only need to make sure they have the right roles, but I also have to determine their association with a "tenant".

Multi-tenancy Model.  For my application I have chosen a multi-tenancy model where the application and database are shared by ALL tenants.  It will be up to the code to ensure that I don't co-mingle a particular tenant's data.  In my example from prior articles, I used an Employee - Department set of entities to discuss relationships.  Imagine now that I need to restrict users of my learning app to a single company.  The Company entity is now the driving entity as to what makes up a tenant.  All other non-system entities that are part of the app (Employee, Department) have to relate to the Company such that when a user logs in, they only see the data for their company.  I also found a cool tool for diagramming the model.



In reviewing how JHipster manages authorizations, I found that the generated code will lock down back end access at the REST API level by adding authorization checks at the REST URL level within the SecurityConfiguration class.  I really like this approach.  It frees the remainder of the app from checking roles at each step.  I will still likely include the Spring Security @PreAuthorize and @PostAuthorize annotations in the high-level service methods of my custom services where appropriate.  I went ahead and set ALL the generated REST APIs to ROLE_ADMIN to lock them down.  I really want to dig into Spring Security's expression-based access control.  It looks to have some pretty compelling features that might come in handy when locking down data for multi-tenancy.

Data vs. Role Security.  In addition to the role-based auths, however, I still have to ensure that my REST API does not allow data from one tenant (Company) to be seen by a user associated with another tenant.  I went ahead and implemented my own version of UserDetails by extending User, and populated it in my custom UserDetailsService.  I included the notion of the tenant in my UserDetails class such that when a user logs in and Spring calls the UserDetailsService, I load the user's default tenant (company) from the database into the UserDetails class.  For now I will rely on passing the UserDetails via parameters to the service and repository layers to restrict data for a given user.  I may refactor to use a ThreadLocal, but there are many documented disadvantages to this.
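The data restriction itself boils down to one invariant: every query for a tenant-scoped entity must be constrained by the caller's tenant.  A sketch of that invariant (in the real app this is a WHERE clause in the repository layer; the entity and field names here are illustrative):

```typescript
// Every tenant-scoped entity carries its tenant's id.
interface Employee { id: number; name: string; tenantId: number; }

// In the real app this constraint is a WHERE clause in the repository
// layer, driven by the tenant loaded into the custom UserDetails; an
// in-memory filter stands in for it here.
function findEmployeesForTenant(all: Employee[], tenantId: number): Employee[] {
  return all.filter(e => e.tenantId === tenantId);
}
```

The discipline is that no repository method for a tenant-scoped entity ever exists without the tenant parameter, so forgetting the constraint becomes a compile-time problem rather than a data leak.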

JHipster and Groovy.  As I begin to lay in new code, I wanted to start using Groovy.  My plan is to leave the generated Java code AS Java (rather than convert to Groovy), but  any new classes will be Groovy.  One very minor snag is that the Groovy compiler wants all sources, Java and Groovy to be in src/main/groovy.  Added these two lines to the Gradle build to fix:

sourceSets.main.java.srcDirs = []
sourceSets.main.groovy.srcDirs += ["src/main/java"]
I put these lines in, wrote some simple Groovy classes and Spock tests, and the compile works fine, both from Gradle and from IntelliJ IDEA.

Wednesday, April 27, 2016

JHipster JDL Notes

JDL.  JDL stands for JHipster Domain Language and is a shorthand notation describing entities and relationships for jhipster-uml to consume when generating the domain portion of your JHipster code.  My current JHipster learning project quickly evolved to using this format after a few painful sessions with the question-and-answer approach of the Yeoman jhipster:entity sub-generator.  The JDL format is pretty concise and allows you to quickly specify the attributes you need to define entities: properties, types, constraints, and relationships.

First Issue Encountered.  Once I had my entities defined in JDL and had the generator working to the point where I had working CRUD for all my entities, I started playing with the functionality.  The first thing I noticed was that all of the entities that had relationships to others (let's say child entities) would always represent the child as the primary key ID value of the object.  To illustrate, I'll use the example of an Employee - Department relationship where a department has many employees.  When listing the employee, JHipster would show all of the attributes of the employee (name, position, years employed) and then show the numeric ID (primary key) of the department.  Not a very good user experience.  Of course what I wanted was the department name, both in the listings of the employee and as a drop-down selection when editing a single employee.

First Issue Solved.  It makes sense that JHipster used the department ID as the data showing the relationship.  After all, I did not tell it WHICH piece of data in the department object would be meaningful (and unique) to the user.  Thankfully the JDL has a nice facility to not only indicate the relationship between entities, but also indicate the field that is meaningful to the user, and thus should be used to represent the relationship on the parent object in the UI.  My original relationship section of my JDL looked like this:

relationship ManyToOne {
    Employee {worksInDept} to Department
}
This defines that there are many employees in a department and that the Employee entity will contain a property named worksInDept that will hold the child Department object.  When the relationship is defined like this, JHipster simply defaults to using the department ID to represent the department in any employee UI areas.  In order to tell JHipster which property on the Department object to use, I had to make the following change:

relationship ManyToOne {
    Employee {worksInDept(deptName)} to Department
}

Note the addition of the deptName property.  I'm not sure what this construct is actually called in JDL.  It shows up in one of their examples but is not documented in the relationship declaration definition:

relationship (OneToMany | ManyToOne | OneToOne | ManyToMany) {
  <from entity>[{<relationship name>}] to <to entity>[{<relationship name>}]
}

Second Issue Encountered.  So the addition of the attribute in the JDL seemed to have solved my first issue.  The generated code included an additional reference in the EmployeeDTO to the new deptName property, called worksInDeptDeptName, and appeared to use it through to the Angular display of employees (lists, edit, etc.).  However, when I ran the app, the mapper did not map the new field on the DTO.  It left it null.  The EmployeeMapper clearly had a mapping annotation to pull the deptName from the related worksInDept object and put it in worksInDeptDeptName, but it wasn't doing it at run time.

Second Issue Solved.  My first thought was that the mapping library was broken.  My second thought was that there is no way that wouldn't have been fixed before I saw it in a release.  I then started looking at the MapStruct library and learned that it does its mapping magic by generating code for the mappers at compile time.  I happened to be running my code in IntelliJ from its Spring Boot run configuration, and I pretty quickly guessed that this was bypassing the code generation that the Gradle build was doing.  I ran the Gradle build and the mappers worked.  No more nulls, and I get the user-friendly department name throughout the Employee UI facilities.
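What MapStruct generates at compile time is conceptually just a flattening mapper.  Here is a hand-written sketch of the equivalent mapping, following the worksInDeptDeptName example above (the interfaces are simplified stand-ins for the real entity and DTO classes):

```typescript
// Simplified stand-ins for the JPA entity and its DTO.
interface Department { id: number; deptName: string; }
interface Employee { id: number; name: string; worksInDept: Department | null; }
interface EmployeeDTO { id: number; name: string; worksInDeptDeptName: string | null; }

// Flatten the related Department's name onto the DTO at map time.
// MapStruct emits equivalent Java at compile time, which is why running
// without the build's annotation processing left the field null.
function employeeToDto(e: Employee): EmployeeDTO {
  return {
    id: e.id,
    name: e.name,
    worksInDeptDeptName: e.worksInDept ? e.worksInDept.deptName : null,
  };
}
```

Since the mapper only exists once the annotation processor has run, any launch path that skips that processing ships a stale or missing implementation.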

Next Up.  One of my big frustrations with code generation approaches is how to add code and not break the upgrade path.  When a new JHipster version comes out, I have a laborious set of tasks to restore my custom additions (homepage, SSL, etc.) after re-running the code generation.  I'll start to think about how to deal with this and write up my findings as I arrive at them.