Tuesday, 27 June 2017

High volume, low latency system

We are currently in the process of optimizing the bidding platform for PocketMath, one of the largest suppliers of mobile programmatic inventory in the world. With a fleet of 15 to 35 bidders around the world, this platform helps to serve 40 to 70 billion requests per day. The latency of the system is pretty good, with the 95th percentile of response time falling below 2 milliseconds.

The optimization process gives us a precious opportunity to think about which factors are crucial to building a high-performance, low-latency system.

In this article, we would like to share the principles that guide our development process.

Background


As some of you may already be familiar with, real-time bidding (RTB) is a means by which advertising inventory is bought and sold on a per-impression basis via programmatic, instantaneous auction. In this ecosystem, PocketMath is a Demand-Side Platform (DSP), which helps buyers purchase impressions from Ad Exchanges.


Because all of the buying and selling happens per impression, the latency requirement for RTB is very strict. Most Ad Exchanges will not accept any response from a DSP after 100 milliseconds. This time constraint is quite tight if we take the network round-trip into consideration. Normally, network transfer contributes more to the total response time than processing the bid request. Therefore, to reduce the timeout risk, a DSP will normally self-impose a much lower limit.

Other than fast response time, another requirement for a DSP is stability. It is a common practice for an Ad Exchange to throttle traffic to a DSP if timeouts happen too often.

Architecture Guideline


As a design mistake can be very costly in the long run, it is better to get it right from the start.


Knowing the limit


Compared to other domains like FinTech, the non-functional requirements of RTB are quite special: the maximum response time is enforced very strictly, but it is not mandatory to process every request. However, the server should still send a no-bid response when it intends to skip a request. In RTB, skipping processing only causes a lost opportunity, which is not too bad compared to what may happen in a mission-critical system. Failing to process a request in time is much worse, because it causes not only a loss of opportunity but also a waste of resources.

Therefore, we designed our system to always operate at its optimal throughput regardless of the traffic volume. To ensure each component in the system is functioning at the optimal level, surge protectors are added at the component clients so that excess load can be automatically discarded as early as possible.

Every quarter, while reviewing the volume of inventory, we also adjust hardware and calibrate all the rate limiters to keep the system running at the best value for money. In case the load suddenly surges due to a spike in traffic or a campaign configuration change, the system still continues to process traffic at its designed capacity and skips the load it cannot handle. The response time stays consistent regardless of the load.
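As an illustration, a surge protector at a component client can be as simple as a semaphore that rejects the overflow instead of queueing it. The sketch below is a minimal example under that assumption; the class and type names are made up for the example and this is not the actual PocketMath implementation.

import java.util.concurrent.Semaphore;

// Minimal surge protector sketch: reject excess load instead of queueing it.
public class SurgeProtector<REQ, RES> {

    public interface Processor<REQ, RES> {
        RES process(REQ request);
    }

    private final Semaphore permits;
    private final Processor<REQ, RES> processor;
    private final RES rejectedResponse; // e.g. a pre-built "no bid" response

    public SurgeProtector(Processor<REQ, RES> processor, int maxConcurrent, RES rejectedResponse) {
        this.processor = processor;
        this.permits = new Semaphore(maxConcurrent);
        this.rejectedResponse = rejectedResponse;
    }

    public RES handle(REQ request) {
        // tryAcquire() never blocks, so overload is discarded as early as possible
        if (!permits.tryAcquire()) {
            return rejectedResponse;
        }
        try {
            return processor.process(request);
        } finally {
            permits.release();
        }
    }
}

A token bucket or a ready-made rate limiter such as Guava's RateLimiter could play the same role; the key point is that the excess is answered with a cheap rejection instead of being queued.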


Knowing when to apply microservices architecture


Microservices architecture gives us a lot of flexibility to develop and maintain the system. However, it also adds network latency to time-critical tasks. Therefore, we need to think twice before applying microservices architecture to our system. For time-critical requests, it is better to minimize the number of network hops that the information needs to travel before the response can be generated. We keep the component that faces exchanges as a nearly monolithic application. It is a huge component with lots of logic and information embedded so that it can process the majority of requests on its own. Only for requests where the required information is too big to cache or needs to be real-time does this component make a network call to other components in the system. Moreover, this component can operate partially by discarding the requests it cannot handle if some of the external components are not available.

Stream Processing


It is a simple fact that a centralized architecture won't scale. Therefore, to be scalable, the system architecture should resemble a graphics card more than a CPU, with information processed in parallel and independently. To achieve that, it is necessary to keep all of the components stateless and each information package self-sufficient for processing.

Try your best to avoid any processing that requires a shared resource like a physical database. For example, if the data is immutable, we can clone it to many read-only databases or caches to avoid centralized processing. It is even more crucial to avoid scenarios where the information cannot be processed independently, such as locking by unique indexes.
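For instance, immutable reference data can be pushed to each node and read from a local snapshot so that the hot path never touches a shared database. The sketch below is only an illustration of that pattern, with hypothetical names, not a description of our actual caches.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of a local read-only snapshot of immutable reference data,
// refreshed in the background so request threads never hit a shared database.
public class LocalSnapshotCache<K, V> {

    private volatile Map<K, V> snapshot = Collections.emptyMap();

    // Called by a background refresher, e.g. every few minutes
    public void refresh(Map<K, V> latest) {
        this.snapshot = Collections.unmodifiableMap(new HashMap<>(latest));
    }

    // Called on the hot path: a lock-free read of the current snapshot
    public V get(K key) {
        return snapshot.get(key);
    }
}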

Eventually, if these conditions cannot be fulfilled, we should try to reduce the impact by minimizing the shared part, for example by applying MapReduce-style processing or in-memory computing. We should also add some redundancy to the components that handle the bottleneck to minimize the risk.

Auto Recovery


Even after applying all of these good practices, maintaining system stability is still a very challenging task because there are too many unknown factors that can affect system throughput. For example, average processing time can be highly variable, while system performance can be temporarily degraded due to backups, hardware upgrades or intermittent network issues.

The easy way to improve system stability is to increase redundancy. However, redundancy works best only when the system is so critical that efficiency is not a concern at all. Otherwise, developers should resort to a smarter method to cope with this challenge.

Fortunately, the approach for this use case is pretty straightforward. When overloaded, you are left with two choices: upgrading the hardware or reducing the load. If possible, we should do both. Autoscaling can be regarded as a DevOps responsibility, but reducing load is a challenge that should be tackled at the architecture level. To do that, we need to build a feedback mechanism so that the front-end components can slow down or stop accepting new requests when the back-end components are overloaded, letting the system return to a balanced state.
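As a rough illustration of such a feedback mechanism (not our actual implementation, and the names are made up), the back-end client can maintain a simple gauge of in-flight work and the front end can start shedding new requests once the gauge reports saturation.

import java.util.concurrent.atomic.AtomicInteger;

// A crude feedback mechanism: the backend client reports how many calls are
// in flight, and the front end stops accepting new work above a threshold.
public class BackpressureGauge {

    private final AtomicInteger inFlight = new AtomicInteger();
    private final int saturationThreshold;

    public BackpressureGauge(int saturationThreshold) {
        this.saturationThreshold = saturationThreshold;
    }

    // Called by the backend client around every downstream call
    public void onRequestStarted()  { inFlight.incrementAndGet(); }
    public void onRequestFinished() { inFlight.decrementAndGet(); }

    // Called by the front end before accepting a new request
    public boolean shouldAcceptNewRequest() {
        return inFlight.get() < saturationThreshold;
    }
}

The front end checks shouldAcceptNewRequest() before taking on new work and answers with a cheap rejection otherwise, which gives the overloaded back end room to drain and brings the system back to a balanced state.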

Implementation Guideline


For a high-performance system, crappy code will be punished as soon as it is exposed. Therefore, it is never redundant to review and optimize your implementation one more time before rolling it out to production. Here are some of our experiences with developing high-performance systems.

Monitoring


For whatever purpose, it is always good practice to implement a health check on the system. However, for a high-performance system, the requirement for monitoring is even more sophisticated, with the need to collect insight about system operation. This information can be crucial for detecting anomalies, preventing crashes and tuning the system. We should care not only whether the implementation works but also how well it executes.

There are some well-known APMs on the market, like New Relic or Datadog, that can help us collect operational metrics and provide alerts when bad things occur. An APM license may not be cheap, but it is highly recommended to afford one because of the benefits it brings in the long term.

In addition to an APM, it is also a good idea to embed a debug API into the health check so that developers can do in-depth investigation on the production environment whenever they need to. This practice has proven to be very useful in resolving outages and troubleshooting user inquiries.
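As an illustration only (the endpoint path and counters below are made up, not our real debug API), such an endpoint in a Spring Boot application could expose a few internal counters next to the usual health status.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical debug endpoint: exposes internal counters alongside the health check
// so that in-depth investigation can be done directly on production.
@RestController
public class DebugController {

    // These counters would be incremented by the request-handling code (not shown)
    private final AtomicLong processedRequests = new AtomicLong();
    private final AtomicLong discardedRequests = new AtomicLong();

    @GetMapping("/health/debug")
    public Map<String, Object> debug() {
        Map<String, Object> info = new LinkedHashMap<>();
        info.put("status", "UP");
        info.put("processedRequests", processedRequests.get());
        info.put("discardedRequests", discardedRequests.get());
        info.put("freeMemoryBytes", Runtime.getRuntime().freeMemory());
        return info;
    }
}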

Testing on Production


This practice will surely trigger some concern, as it was considered a taboo in the IT world until recently. Simply speaking, the landscape of the software industry has changed. In the past, software development was normally a side function of big corporates with the mission to build domain-specific applications. Nowadays, software development tends to play a much bigger role in transforming life through various startups around the world. In this new role, for an IT project to be successful, the pace of change may be more important than maintaining the stability of the system. Therefore, we need to make a case-by-case judgment on the balance between time to market and quality control.

For high-performance systems, testing in Staging is less effective because most performance issues only appear under heavy load. It is also difficult to load test them using simulated data because there are too many possible combinations of inputs that may expose hidden bugs. Therefore, similar to what Facebook and Google have done, it is not necessarily harmful to sacrifice a small part of the traffic for testing new features. The key requirement for this practice is the ability to identify and contain the damage when it happens.

Understanding Machine

At the beginning of this century, there were many initiatives to make programming easier by isolating the business logic from the underlying machine execution. As a result, the development of complicated applications became a lot easier with additional layers of abstraction. However, as a side effect, developers can go through many projects without picking up fundamental knowledge about the underlying execution.

However, if you are lucky enough to work on a high-performance system, this knowledge becomes important and relevant again. We have seen tremendous benefits from well-optimized code that makes good use of the hardware to get the work done. This benefit can come as higher throughput, a smaller memory footprint or even more critical outcomes like lower latency and more stable performance. It is easy to see that the latter outcomes are not easy to achieve with more hardware, only with a better implementation.

At the basic level, developers should understand how the common constructs of their programming language are implemented under the hood. At the advanced level, it is important to pick up knowledge about the operating system, the network and the hardware infrastructure as well.

Understandable Code


The biggest contributor to performance and functional bugs in our system is code complexity. It is difficult to add new features if the existing codebase is too difficult to understand. A patching mentality will keep increasing the technical debt until it is almost impossible to avoid making mistakes.

Moreover, complicated code has a negative impact on CPU utilization, as the biggest performance penalties come from CPU cache misses rather than raw processing speed. Hence, a high-performance implementation is also a clean, easy-to-understand and straightforward implementation.

In-memory computing


In-memory computing has become a hot trend recently because memory is getting much cheaper and bigger. In the past, when we needed more performance, the most common trick was to increase the concurrency level. Most web servers in the past had a high number of CPU cores but a relatively low amount of memory per core. That fact implies that the web server's role was processing web requests rather than data; most data processing usually happened in the data warehouse rather than in the application. However, for a low-latency system, retrieving and processing data remotely is considered too expensive. Therefore, if an application cannot churn out higher throughput or lower latency with more memory, it might not be well optimized to utilize all the available hardware. In the perfect scenario, the system should reach maximum CPU and memory utilization at the same time. When under load, if only one of them is the bottleneck while the other still has lots of spare capacity, that may be a good indicator of a wrong hardware configuration or bad optimization.

In a high-performance system, both CPU and memory should be treated as precious resources. We should be careful to conserve both with well-known techniques like object pools, primitive types, suitable data structures and efficient implementations.
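For example, a very small object pool can be written in a few lines. The sketch below is only a generic illustration (not thread-safe, intended for single-threaded use), not a production-grade pool.

import java.util.ArrayDeque;
import java.util.function.Supplier;

// Minimal object pool sketch: reuse expensive objects instead of allocating
// them per request, which reduces garbage collection pressure.
public class SimpleObjectPool<T> {

    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public SimpleObjectPool(Supplier<T> factory) {
        this.factory = factory;
    }

    public T acquire() {
        T instance = free.pollFirst();
        return instance != null ? instance : factory.get();
    }

    public void release(T instance) {
        free.addFirst(instance); // the caller must reset the object state first
    }
}

Similarly, preferring primitive arrays (for example int[] over List<Integer>) avoids boxing and keeps data compact in memory, which usually translates into fewer cache misses.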

Conclusion


Developing a low-latency, high-throughput application requires special skill sets that are not easy to find in the mass market. A common perception is that good developers will write performant code, and this is true most of the time. However, many experienced developers who shine in building other applications still struggle when dealing with a high-performance system because of old habits and a lack of performance considerations. The key to success in this area lies in self-reliant analysis, fundamental understanding and logical thinking.

It is also worth highlighting that it is not always better to follow the trends in the development world, because many new methods are good for purposes other than performance. Therefore, it is good to keep learning new things, but always understand the cost versus the benefit of each of them based on your priorities.

Saturday, 8 April 2017

MySQL Partition Pruning

Recently, we learned an expensive lesson about MySQL partition pruning. Therefore, it is better to share it here so that others will not repeat our mistake.

Background


In our system, there is a big stats table that does not have a primary key or indexes. This table is partitioned, but the lack of indexes often causes a full partition scan or even a full table scan when querying. To make things worse, the system still continues writing to this table, making it slower every day.

To fix the performance issue, we wanted to clean up the legacy data and add new indexes. However, this is not easy because the table is too big. Therefore, we chose the longer approach of migrating only the wanted data from the old table to a new table with a proper schema.

Partition by hash


It would have been fine if we had only done what we originally intended to do. However, we changed the partition type for convenience, and that made the new table slower.

In the original table, the partitioning is based on a timestamp column that represents the time as a number of hours from epoch. For example, the first second of the year 2017 in GMT is 1483228800 seconds from epoch. To get the number of hours, we divide the number by 3600: (1483228800 div 3600) = 412008.

With the partition-by-range type, we need a maintenance script that creates the monthly partitions for the next year. This way of partitioning is not ideal because the partitions are big and uneven. Hence, we converted from monthly to weekly partitions, but we were too lazy to define each range and switched from partition by range to partition by hash.

This is a short version of how the partition definition looks if we partition by range:
PARTITION BY RANGE (hour_epoch)
(PARTITION pOct2016 VALUES LESS THAN (419304),
 PARTITION pNov2017 VALUES LESS THAN (420024) ENGINE = InnoDB,
 PARTITION pDec2017 VALUES LESS THAN (420768) ENGINE = InnoDB,
 PARTITION pMax VALUES LESS THAN MAXVALUE ENGINE = InnoDB)
And this is how the partition definition looks if we partition by hash:
partition by hash (hour_epoch div 168) partitions 157;
The partition-by-hash type did more than just shorten the syntax. MySQL will try to split records evenly by applying a modulo function to select a partition. However, to make the duration of one partition one week, we divide the hour_epoch number by 168 to effectively get a week_epoch.
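To make the arithmetic concrete, here is a small illustration (not part of our migration scripts) of how a timestamp maps to hour_epoch and to the weekly bucket that the hash function then takes a modulo of.

// Illustration of the partitioning arithmetic described above.
public class PartitionBuckets {

    // e.g. 1483228800 seconds (2017-01-01T00:00:00Z) -> hour_epoch 412008
    static long hourEpoch(long epochSeconds) {
        return epochSeconds / 3600;
    }

    // one partition per week: 168 hours
    static long weekBucket(long hourEpoch) {
        return hourEpoch / 168;
    }

    public static void main(String[] args) {
        long epochSeconds = 1483228800L; // first second of 2017, GMT
        System.out.println(hourEpoch(epochSeconds));              // 412008
        System.out.println(weekBucket(hourEpoch(epochSeconds)));  // 2452
    }
}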

With the new table schema, we were happy with smaller partitions, a shorter partition definition, and more indexes.

Performance issue


Because of the huge volume of data, we could not fully migrate the data to the new schema to verify performance. We only did a preliminary performance test with 2 weeks of data and did not detect any performance issue. However, in the final testing, we were surprised to observe mixed results. Most of the queries were faster as expected, but some were slower.

After investigating, we realized that instead of scanning only a few partitions, MySQL does a full table scan for time-range queries. Even stranger, partition pruning only kicked in when the date range was smaller than 3 weeks. Totally surprised by this result, we overcame our procrastination, read the MySQL documentation carefully and realized why.

"For tables that are partitioned by HASH or [LINEAR] KEY, partition pruning is also possible in cases in which the WHERE clause uses a simple = relation against a column used in the partitioning expression"

As the documentation clearly explains, partition pruning only works with equality conditions for the partition-by-hash type. We did not detect this issue earlier because the query optimizer will automatically convert a range condition into equality conditions if the number of distinct values within the range is small enough. Unfortunately, in our early test, 2 weeks of data was short enough for the query optimizer to hide the problem from us.

Solution


After learning about the issue, we struggled to find a way to fix the performance problem. There were 2 proposed solutions:

  • Trick the query optimizer into doing the work by splitting a big range into multiple small ranges, each fitting one partition. In this way, the query optimizer will work on each individual small range (see the sketch after this list).
  • Rebuild the schema again with the proper partition type. 
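For reference, the first option boils down to splitting the queried range into week-aligned chunks so that each chunk spans few enough distinct hour_epoch values for the optimizer to convert it into equality conditions. Below is a rough, illustrative sketch of that splitting, not our production code; it assumes the queries filter on the hour_epoch column.

import java.util.ArrayList;
import java.util.List;

// Split a big [fromHour, toHour] range into week-aligned sub-ranges so that the
// MySQL optimizer can enumerate each small range and prune hash partitions.
public class RangeSplitter {

    static final int HOURS_PER_WEEK = 168;

    static List<long[]> splitByWeek(long fromHour, long toHour) {
        List<long[]> ranges = new ArrayList<>();
        long start = fromHour;
        while (start <= toHour) {
            // last hour of the week bucket that contains 'start'
            long weekEnd = (start / HOURS_PER_WEEK + 1) * HOURS_PER_WEEK - 1;
            long end = Math.min(weekEnd, toHour);
            ranges.add(new long[] { start, end });
            start = end + 1;
        }
        return ranges;
    }
}

Each sub-range can then be sent as its own BETWEEN condition (or the conditions combined with OR), so that, as described above, the optimizer only has to deal with one small range at a time.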
The first solution is quick but dirty while the second solution is too time-consuming. Eventually, we had almost decided to launch the new table with the first solution until we found a quick way to implement the second solution.

We dug through the MySQL documentation and learned that re-partitioning is basically a copy-and-paste operation. However, MySQL also has another command that allows us to make some partition changes without too much effort.
ALTER TABLE pt
    EXCHANGE PARTITION p
    WITH TABLE nt;

With this command, MySQL allows us to exchange a partition of one table with another, non-partitioned table. Even though this is not a direct exchange between 2 partitions of 2 tables, it is just a minor inconvenience to do one more intermediate swap through a temp table.

This is how our partition swapping looks:

ALTER TABLE origin_table EXCHANGE PARTITION p1 WITH TABLE temp_table;
ALTER TABLE final_table EXCHANGE PARTITION p1 WITH TABLE temp_table;

However, this is not as fast as you may guess, as MySQL does a row-by-row validation to ensure every record of the temp table is eligible for storing in the final table partition. With MySQL 5.7, this validation can be turned off by adding "WITHOUT VALIDATION" to the end of the second command.

Because we use Aurora, which only supports MySQL 5.6, it still took us 2 days to fully update the partition type. However, this would have been one month if we had not used partition exchange.

Fortunately, we managed to recover from the mistake this time. We hope that you learn from our mistake and remember to read the documentation carefully before using any fancy feature.



Monday, 2 January 2017

Retrospective

Some folks have asked me which Agile practice is the most important, and my immediate answer is the Retrospective. From my own experience, the Retrospective plays the biggest role in the success of practicing Agile. Unfortunately, it may not necessarily be a popular practice. This is a bit sad because, after trying Agile in different organizations, I have seen no practice that shows value as early and as obviously as the Retrospective. Moreover, it is one of the easiest practices to adopt because it does not require discipline to practice regularly. It can be practiced as little as once a year and still be able to make a difference.

Why retrospective is so important

Stay true to "agile" spirit

Unless you are hiding under a rock, it is hard to ignore the debate about "Agile" versus "agile". Lots of developers are upset with the fact that agile is being seen as a ruleset rather than a mindset. Unfortunately, trying to adopt agile by following the ruleset may lead to a rigid mindset, which is the reverse of the Agile manifesto.

Retrospective is not vulnerable to this problem because it is the most flexible practice in Agile. It stays true to the agile spirit by not specifying the method but only the purpose and benefit of the activity. Therefore, it leaves the team the freedom to conduct the activity in whatever way fits. Rule followers can still have it their way with the many techniques available, but in general this practice is very personal. While the Planning, User Stories, Backlog and Iteration practices may look pretty much the same everywhere, the Retrospective is always unique. Because each team has its own problems and members, following the same format still leads to different outcomes.

First step toward improvement

It is quite obvious that in order to improve, we need to see our weaknesses and limits. This logic applies not only to software development but to any other aspect of life as well. Therefore, one of the first things one should do before introducing any change is to spend time learning about the characteristics of each individual and the dynamics of the team.

The traditional method of understanding the team through psychology tests is overrated. It tends to make teams fall into common stereotypes. It is not that a psychology test is a waste of time, but in reality it works better for the individual, especially when the subject of the test is willing to collaborate. Therefore, a psychology test is better used as a method for collecting feedback and measuring improvement.

For collecting insights about team dynamics, the Retrospective is a more effective method because it is less intrusive. People are normally more comfortable when we ask less and let them talk more about what they are concerned about. Fortunately, that is exactly what the Retrospective is about.

Keep a close look at the team well-being

The days when developers needed to pray to get a decent job have passed. Nowadays, the demand for good developers is so high that most companies turn to headhunters to recruit talent. Hence, it is challenging not only to get more talent but also to retain it.

We may not be able to do much if it is a paycheck competition. However, changing jobs is rarely purely paycheck driven. It can be very emotionally difficult to leave a job you love and a caring environment. Therefore, if the leader keeps a close eye on the team and each individual, there will be a much greater chance to shield the team from lucrative offers.

How to run retrospective

As mentioned above, a good Retrospective is one that lets people voice their inner concerns and thinking. Therefore, anything that resembles form filling or an interview is counterproductive. The better suggestion is a flexible format. The Retrospective itself needs to be interesting and intimate enough to put people in a comfortable zone. Our ultimate goal is to let people share more so that the team can improve.

An effective facilitator needs to know how to stir up the conversation when it goes quiet and to stay silent when people are in deep reflection. Any context switching is helpful as well. For example, a retrospective session can be held out of the office, far from the boss, over a cup of coffee.

The last thing you need to remember about the retrospective is to never, ever take any disciplinary action based on what you have learned in it. Otherwise, it will be rightfully viewed as a betrayal of trust. Honestly, this would be the worst thing that could happen to the team.

So, if your team has not had an out-of-the-box, open-minded retrospective session for some time, please find an opportunity to bring the team to a nice place. I believe you and your team will have a good time.

Wednesday, 28 September 2016

Let's Implement "Login with Github" button

Recently we delivered a simple workshop at the Spring User Group Singapore about implementing a "Login with Github" button using Spring Boot, Spring Security, OAuth2 and Angular:

https://github.com/verydapeng/springular


Saturday, 16 April 2016

Spring Oauth2 with JWT Sample

Some time ago, we published an article sharing a custom approach to implementing stateless sessions in a cloud environment. Today, let's explore another popular use case: setting up OAuth 2 authentication for a Spring Boot application. In this example, we will use JSON Web Token (JWT) as the format of the OAuth 2 token.

This sample was developed partly based on the official sample of Spring Security OAuth 2. However, we will focus on understanding the principal constructed for the OAuth 2 request. The source code is available at https://github.com/tuanngda/spring-boot-oauth2-demo.git

Background

OAuth 2 and JWT 


We will not go into detail about when you may want to use OAuth 2 and JWT. In general, OAuth 2 is useful when you need to allow other people to build front-end apps for your services. We focus on OAuth 2 and JWT because they are the most popular authentication framework and protocol on the market.

Spring Security OAuth 2


Spring Security OAuth 2 is an implementation of OAuth 2 that is built on top of Spring Security, which itself is a very extensible authentication framework.

Overall, Spring Security authentication includes 2 steps: creating an authentication object for each request and applying authorization checks depending on that authentication.

The first step is done in a multi-layer security filter chain. Depending on the configuration, each layer can help to create the authentication object for the web request with basic authentication, digest authentication, form authentication or any custom method of authentication. The client-side session we built in the previous article is effectively a layer of custom authentication, and Spring Security OAuth 2 is built on the same mechanism.

Because our application in this example both provides and consumes tokens, Spring Security OAuth 2 should not be the sole authentication layer for the application. We need another authentication mechanism to protect the token provider endpoint so that the resource owner can authenticate himself before getting the JWT token.

For a cluster environment, the token or the secret used to sign the token (for JWT) is supposed to be persisted so that the token can be recognized at any resource server, but we skip this step to simplify the example. Similarly, the user authentication and client identities are all hard-coded.

System Design

Overview


In our application, we need to set up 3 components:

  • Authorization Endpoint and Token Endpoint to help provide OAuth 2 tokens.
  • A WebSecurityConfigurerAdapter, which is an authentication layer with a hard-coded order of 3 (according to Dave Syer). This authentication layer sets up the authentication and principal for any request that contains an OAuth 2 token.
  • Another authentication mechanism to protect the Token Endpoint and other resources if the token is missing. In this sample, we chose basic authentication for its simplicity when writing tests. As we do not specify the order, it takes the default value of 100. With Spring Security, the lower the order, the higher the priority, so we should expect OAuth 2 to come before basic authentication in the FilterChainProxy. Inspecting it in the IDE proves that our setup is correct.



In the above picture, OAuth2AuthenticationProcessingFilter appears in front of BasicAuthenticationFilter.

Authorization Server Configuration


Here is our configuration for the Authorization and Token Endpoints:

@Configuration
@EnableAuthorizationServer
public class AuthorizationServerConfiguration extends AuthorizationServerConfigurerAdapter {

    @Value("${resource.id:spring-boot-application}")
    private String resourceId;
    
    @Value("${access_token.validity_period:3600}")
    int accessTokenValiditySeconds = 3600;

    @Autowired
    private AuthenticationManager authenticationManager;
    
    @Bean
    public JwtAccessTokenConverter accessTokenConverter() {
        return new JwtAccessTokenConverter();
    }

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
        endpoints
            .authenticationManager(this.authenticationManager)
            .accessTokenConverter(accessTokenConverter());
    }
    
    @Override
    public void configure(AuthorizationServerSecurityConfigurer oauthServer) throws Exception {
        oauthServer.tokenKeyAccess("isAnonymous() || hasAuthority('ROLE_TRUSTED_CLIENT')")
            .checkTokenAccess("hasAuthority('ROLE_TRUSTED_CLIENT')");
    }

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
            .withClient("normal-app")
                .authorizedGrantTypes("authorization_code", "implicit")
                .authorities("ROLE_CLIENT")
                .scopes("read", "write")
                .resourceIds(resourceId)
                .accessTokenValiditySeconds(accessTokenValiditySeconds)
        .and()
            .withClient("trusted-app")
                .authorizedGrantTypes("client_credentials", "password")
                .authorities("ROLE_TRUSTED_CLIENT")
                .scopes("read", "write")
                .resourceIds(resourceId)
                .accessTokenValiditySeconds(accessTokenValiditySeconds)
                .secret("secret");
    }
}

There are a few things worth noticing about this implementation:

  • Setting up a JWT token is as simple as using JwtAccessTokenConverter. Because we never specify the signing key, it is randomly generated. If we intended to deploy our application to a cloud environment, it would be a must to sync the signing key across all authorization servers.
  • Instead of creating an authentication manager, we choose to inject the existing authentication manager from the Spring container. With this step, we can share the authentication manager with the basic authentication filter.
  • It is possible to have trusted and non-trusted applications. Trusted applications can have their own secret. This is necessary for the client credentials authorization grant. Except for client credentials, all 3 other grants require the resource owner's credentials.
  • We allow anonymous access to the check token endpoint. With this configuration, checking a token is accessible without basic authentication or an OAuth 2 token.

Resource Server Configuration


Here is our configuration for the Resource Server:

@Configuration
@EnableResourceServer
public class ResourceServerConfiguration extends ResourceServerConfigurerAdapter {
    
    @Value("${resource.id:spring-boot-application}")
    private String resourceId;
    
    @Override
    public void configure(ResourceServerSecurityConfigurer resources) {
        resources.resourceId(resourceId);
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
         http.requestMatcher(new OAuthRequestedMatcher())
                .authorizeRequests()
                 .antMatchers(HttpMethod.OPTIONS).permitAll()
                    .anyRequest().authenticated();
    }
    
    private static class OAuthRequestedMatcher implements RequestMatcher {
        public boolean matches(HttpServletRequest request) {
            String auth = request.getHeader("Authorization");
            // Determine if the client request contained an OAuth Authorization
            boolean haveOauth2Token = (auth != null) && auth.startsWith("Bearer");
            boolean haveAccessToken = request.getParameter("access_token") != null;
            return haveOauth2Token || haveAccessToken;
        }
    }

}

Here are a few things to take note of:
  • The OAuthRequestedMatcher is added in so that the OAuth filter will only process OAuth 2 requests. We added it so that an unauthorized request will be denied at the basic authentication layer instead of the OAuth 2 layer. This may not make any difference in terms of functionality, but we added it for usability. For the client, they will receive a 401 HTTP status with this new header versus the old header:
    • WWW-Authenticate:Basic realm="Realm"
    • WWW-Authenticate:Bearer realm="spring-boot-application", error="unauthorized", error_description="Full authentication is required to access this resource"
  • With the new response header, a browser will automatically prompt the user for the username and password. If you do not want the resource to be accessible by any other authentication mechanism, this step is not necessary.
  • Some browsers like Chrome like to send an OPTIONS request to look for CORS before making an AJAX call. Therefore, it is better to always allow OPTIONS requests.

Basic Authentication Security Configuration


As mentioned earlier, we need another authentication mechanism to protect the token provider endpoint:

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
    
    @Autowired
    public void globalUserDetails(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication().withUser("user").password("password").roles("USER").and().withUser("admin")
                .password("password").roles("USER", "ADMIN");
    }
    
    @Override
    protected void configure(HttpSecurity http) throws Exception {
     http
        .authorizeRequests()
            .antMatchers(HttpMethod.OPTIONS).permitAll()
            .anyRequest().authenticated()
            .and().httpBasic()
            .and().csrf().disable();
    }
    
    @Override
    @Bean
    public AuthenticationManager authenticationManagerBean() throws Exception {
        return super.authenticationManagerBean();
    }
}

There are a few things to take note of:

  • We expose the AuthenticationManager bean so that our two authentication security adapters can share a single authentication manager.
  • Spring Security CSRF works seamlessly with JSP but is a hassle for a REST API. Because we want this sample app to be used as a base for users to develop their own applications, we turned CSRF off and added a CORS filter so that it can be used right away.

Testing


We wrote one test scenario for each authorization grant type, following the OAuth 2 specification exactly. Because Spring Security OAuth 2 is an implementation based on the Spring Security framework, our interest veers toward seeing how the underlying authentication and principal are constructed.

Before summarizing the outcome of the experiment, let's take a quick look at a few notes:

  • Most of the requests to the token provider endpoints were sent as POST requests, but they include the user credentials as parameters. Even though we put these credentials as part of the URL for convenience, never do this in your OAuth 2 client (a hedged example of a cleaner request is sketched after these notes).
  • We created 2 endpoints, /resources/principal and /resources/roles, to capture the principal and authorities for OAuth 2 authentication.
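For illustration, a Resource Owner Password Credentials token request against this sample (client trusted-app/secret, user user/password, the standard /oauth/token endpoint) could look like the rough sketch below. It is not the exact test code from the repository, and it assumes the application is running locally on port 8080.

import java.util.Base64;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

// Rough sketch of a Resource Owner Password Credentials token request.
public class PasswordGrantExample {

    public static void main(String[] args) {
        HttpHeaders headers = new HttpHeaders();
        // client credentials of the trusted app, sent as basic authentication
        String clientAuth = Base64.getEncoder().encodeToString("trusted-app:secret".getBytes());
        headers.set("Authorization", "Basic " + clientAuth);
        headers.setContentType(MediaType.APPLICATION_FORM_URLENCODED);

        // resource owner credentials go in the form body, not in the URL
        MultiValueMap<String, String> form = new LinkedMultiValueMap<>();
        form.add("grant_type", "password");
        form.add("username", "user");
        form.add("password", "password");

        ResponseEntity<String> response = new RestTemplate().postForEntity(
                "http://localhost:8080/oauth/token", new HttpEntity<>(form, headers), String.class);
        System.out.println(response.getBody()); // JSON containing the JWT access_token
    }
}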
Here is our setup:

User        | Type           | Authorities         | Credential
user        | resource owner | ROLE_USER           | Y
admin       | resource owner | ROLE_ADMIN          | Y
normal-app  | client         | ROLE_CLIENT         | N
trusted-app | client         | ROLE_TRUSTED_CLIENT | Y


Here is what we found out:

Grant Type                          | User | Client      | Principal   | Authorities
Authorization Code                  | user | normal-app  | user        | ROLE_USER
Client Credentials                  | NA   | trusted-app | trusted-app | No Authority
Implicit                            | user | normal-app  | user        | ROLE_USER
Resource Owner Password Credentials | user | trusted-app | user        | ROLE_USER
This result is pretty much as expected, except for Client Credentials. Interestingly, even though the client retrieves the OAuth 2 token with its client credentials, the approved request still does not carry any of the client's authorities, only the client principal. I think this makes sense because the token from the Implicit grant type cannot be reused.

Monday, 11 January 2016

Should you mind your own business?

In a recent Lean Coffee retrospective, each member of our team was asked to raise one question or concern about the working environment. For me, the burning question was how much we should mind other people's business. After voicing my concern, I got a subtle response from the team, as people did not feel very comfortable expressing their thoughts on this controversial topic. Even setting aside the possibility of hiding real opinions to be politically correct, it is quite likely that many of us still do not know which attitude is more desirable in our working environment. Talking about personal preference, I have met people who truly believe in opposite sides of the argument. Apparently, if the workplace mixes both types of people with opposite mindsets, conflicts tend to happen more often.

However, this topic is rarely discussed in the workplace. Therefore, there is often a lack of consensus in the corporate environment regarding how much we should care about other people's work. As a consequence, some people may silently suffer while others may feel frustrated with the lack of communication and cooperation.

So, is there a right mindset that we should adopt in our working environment, or should we just let each person have it their own way and hope that they will slowly adapt to each other? Let's discuss whether we have any viable solution.

Mind your own business

Micro Management

Usually, the culprits of meddling with other people's work are the ones who do the managing. It is obvious that no one likes to be micro-managed, not even the micro manager himself. Being told what to do in detail gives us limited space for innovation and self-learning, plus a bitter taste of not being trusted.

In real life, micro management is rarely the optimal solution. Even in the best scenario, micro management can only help to deliver the project with average quality and create a bunch of unhappy developers. Following instructions rather than relying on one's own thinking rarely creates excellent work. To be fair, it can help to rescue a project if progress falls below expectations, but it will not help much in achieving excellence. It is even worse that a manager who accomplishes the job through micro management can become addicted to it and find it harder to place trust in his subordinates in the future.

To avoid misunderstanding

It is even harder to interfere with someone else's work if you do not happen to be a senior or the supervisor. In reality, an act of goodwill can be interpreted as an act of arrogance unless one has managed to earn a reputation in the workplace. Even if the contribution brings an obvious outcome, not every working environment encourages heroism or rock star programmers. Moreover, some introverted folks may not feel comfortable with the attention that comes as a side effect.

Similar to the above, this heroic act is normally more welcome in a time of crisis but may not be very well received when the project is already stable.

Or mind the other people business as well

Reaching team goal

To be fair, no one bothers to care about someone else's work if it is not for the sake of the project. Of course, there are many poisonous managers who want to act busy by creating artificial pressure, but in this article, let's focus on the people who want to see the project succeed. These people sometimes step out of their role profiles and just do whatever is necessary to get the job done.

Eventually, because project work is still teamwork, it may be more beneficial for each member of the team to think about and focus on the common goal rather than an individual mission. It is pretty hard to keep the information flowing and the components integrating smoothly if each member only focuses on fulfilling their role. No matter how good the plan is, there will likely be some missing pieces, and those missing pieces need to be addressed as soon as possible to keep the project moving forward.

The necessary evil to get job done

Many successful entrepreneurs have claimed that the secret of their success is delegating tasks to capable staff and placing trust in them. It is definitely the best solution when you have capable staff. However, in reality, most of us are not entrepreneurs, and the people working on a project are chosen from the available resources rather than the best resources. There may be a time when a project is prioritized and granted the best resources available, which is when it is in a deep crisis. However, we would not want our project to go through that.

Therefore, from my point of view, the necessary evil here is a task-oriented attitude over a people-oriented attitude. We should value people and give everyone a chance for self-improvement, but still, task completion is the first priority. If the result of the work is not the first priority, it is hard to measure and improve performance. I feel it is even more awkward to hamper performance for the sake of human well-being when the project is failing.

So, what is the best compromise?

I think the best answer is balance. Knowing that meddling with other people's work is risky but that project success is the ultimate priority, the best choice should be to define a minimal acceptable performance and be ready to step in if the project is failing. Eventually, we do not work to fulfill our own tasks; we work together to make the project succeed. From the corporate point of view, personal success provides very little benefit other than your own well-being. However, don't let this consume you and become addicted to the feeling of being people's hero. Plus, don't raise the bar too high; otherwise, the environment will be stressful and people will feel less encouraged.

Monday, 6 July 2015

Designing database

Database design has evolved greatly over the last 10 years. In the past, it used to be the database analyst's job to fine-tune SQL queries and database indexes to ensure performance. Nowadays, developers play a much more crucial role in ensuring the scalability of data.

The database design task, which used to be the autonomy of database specialists, has now become an exciting task that requires a lot of creativity. In this short article, let's walk through an example of a real-life problem to see how database design has changed.

Requirements

In this example, our business requirement is to build a database to store property information for the country. First, we need to store any property and its landlords. If a property is being leased, the system needs to store tenant information as well. The system should also record property activities, including buying, selling and renting.

As in a typical database system, the user should be able to query properties by any information like address, owner name, district, age,... The system needs to serve data for both real-time queries and reporting purposes.

Analysis

It is pretty obvious that there are a few entities in this domain, like landlord, tenant, transaction and property. Landlord and tenant can be further analysed as people acting in different roles. For example, one person can rent out his house and rent another house to live in, which means he can be the landlord of one property and the tenant of another. That leaves us with 3 major entities: person, property and transaction. The person and property entities have a many-to-many relation to each other. The transaction entity links to one property and at least one person.

If we group some common attributes like occupation, district and building, it is possible to introduce some other sub-entities that may help to reduce redundancy in the information.

The era of relational database

If you are a developer trapped in the relational database era, the only viable choice for persistence is a relational database. Naturally, each entity is stored in a table. If there is a relationship between 2 entities, they are likely to refer to each other by foreign keys.

With this setup, there is zero redundancy and every piece of information has a single source of truth. Obviously, it is the most efficient way in terms of storage.

There may be an issue here, as it is not easy to implement text searching. Whether 10 years ago or today, text search has never been supported well by relational databases. The SQL language provides some wildcard matching, but it is still very far from full text search.

Assuming that you have completed the task of defining the database schema, the fine-tuning task is normally the job of database analysts; they will look into every individual query, view and index to increase performance as much as possible.

However, if the reader has spent years working on relational databases, it is quite easy to see the limits of this approach. A typical query may involve joining several tables. While it works well for a low number of records, the solution seems less feasible when the number of tables or the number of records in each table increases. All kinds of tweaks like data sharding, scaling vertically or adding indexes only help to increase performance up to a certain level. No magic can help if we are going to deal with hundreds of millions of records or joins across more than 10 tables.

Extension of relational database

To solve the issue, developers have tried several techniques that may scale better than a traditional relational database. Here are some of them:

Database explosion

This technique reverses the process of normalizing data in a relational database. For example, instead of joining property with the building, district or country table, we can simply copy all the columns of the relevant records into the main table. As a consequence, duplication and redundancy happen. There is no single source of truth for sub-entities like building, district and country. In exchange, the joining part of the SQL query is simplified.

Explosion is an expensive process that may take hours or even days to run. It sacrifices space and data freshness in order to increase real-time query performance.

Adding document database

In this technique, the relational database is still the source of truth. However, to provide text search, important fields are extracted and stored in a document database. For example, knowing that users will search for people by age, gender and name, we can create documents that contain this information plus the record id and store them in a Solr or Elasticsearch server.

A real-time query to the system will first be answered by searching the document database. The answer, which includes a bunch of record ids, is then used by the relational database to load the records. In this approach, the document database acts like an external index system that helps to provide text search capability.

Storing the whole data to a noSQL database

The final choice is storing the data in an object-oriented or document database. However, this approach may add a lot of complexity to data maintenance.

To visualize, we can store the whole property or person in the database. The property object contains its owner objects. In reverse, the owner object may include several property objects. In this case, it is quite a hassle to maintain the set of related objects if the data changes.

For example, if a person purchases a property, we need to go to the property object to update the owner information and go to that person's object to update the property information.

Combining relational database and noSQL database

The limits of existing methods

After scanning through the approaches mentioned above, let's try to find the limits of each approach.
  • A relational database normalizes data before storing it to avoid duplication and redundancy. However, by optimizing storage, it causes additional effort when retrieving the data. Taking into consideration that a database is normally limited by querying time, not storage, this doesn't seem to be a good trade-off.
  • Explosion reverses the normalizing process, but it cannot offer fresh data as an explosion normally takes a long time to run. Comparing running an explosion with storing the whole entity object in an object-oriented database, the latter option may be easier to maintain.
  • Adding a document database offers text search, but I feel that the roles should be reversed to improve scalability. A document database is faster for retrieval while a relational database is better at describing relationships. Hence, it doesn't make sense to send the record ids from the document database back to the relational database to retrieve the records. What happens if there are millions of record ids to be found? Retrieving those records from a noSQL database is typically faster than from a relational database.
  • As mentioned above, when these entities are inter-linked, there is no easy way to separate them out for storing in an object-oriented database.
Proposing a combination of relational and noSQL databases to store data

Thinking about these limits, I feel that the best way to store the data is to combine both a relational and a document database. Both of them will act as sources of truth, storing what they do best. Here is an explanation of how we should split the data.

We store the data similarly to a traditional relational database but split the columns into 2 types:
  • Columns that store id or foreign keys to other entity ids ("property_id", "owner_id",..) or unique fields
  • Columns that store data ("name", "age",...)
After this, we can remove any data column from the relational database schema. It is possible to keep some simple fields like "name" and "gender" if they help to give us some clues when looking at the records. After that, we store the full entities in a document database. We should try to avoid making cross-references in the stored documents.

Explain the approach by example

Let's try to visualize the approach by describing how we should implement some simple tasks:
  • Storing a new property owned by a user
    • Configure JPA to only store the name and id of each main entity like person or property. Ignore data fields and sub-entities like building, district and country.
    • Store the property object in the relational database. As a result of the earlier step, only the id and name of the property are persisted.
    • Update the property object with the persisted ids.
    • Store the property object in the document database.
    • Store the owner object in the document database.
  • Querying a property directly
    • Send the query to the document database and retrieve the records.
  • Querying a property based on owner information
    • Send a query to the relational database to find all properties that belong to the owner.
    • Send a query to the document database to find these properties by ids.
In the above steps, we want to store records in the relational database first because of the auto id generation. With this approach, we have a very thin relational database that only captures the relationships among entities rather than the entities themselves. Below is a rough sketch of how the write path for the first task might look.
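This sketch is only an illustration of the idea: the repository and store interfaces, the domain classes and their methods are hypothetical, and real code would also need to handle the failure of either write.

// Hypothetical service illustrating the dual write: the relational database keeps
// ids and relationships, the document database keeps the full entities.
public class PropertyService {

    private final RelationalPropertyRepository relationalRepo; // e.g. backed by JPA
    private final DocumentStore documentStore;                 // e.g. a document database client

    public PropertyService(RelationalPropertyRepository relationalRepo, DocumentStore documentStore) {
        this.relationalRepo = relationalRepo;
        this.documentStore = documentStore;
    }

    public void registerProperty(Property property, Person owner) {
        // 1. Persist the thin record (id, name, owner_id) to get the generated id
        long propertyId = relationalRepo.saveThinRecord(property.getName(), owner.getId());
        property.setId(propertyId);

        // 2. Store the full objects, now carrying the generated id, in the document database
        documentStore.save("property", propertyId, property);
        documentStore.save("person", owner.getId(), owner);
    }

    // Hypothetical collaborator interfaces, shown only to keep the sketch self-contained
    interface RelationalPropertyRepository {
        long saveThinRecord(String name, long ownerId);
    }

    interface DocumentStore {
        void save(String collection, long id, Object document);
    }

    // Minimal domain classes for illustration
    static class Property {
        private long id;
        private final String name;
        Property(String name) { this.name = name; }
        String getName() { return name; }
        long getId() { return id; }
        void setId(long id) { this.id = id; }
    }

    static class Person {
        private final long id;
        Person(long id) { this.id = id; }
        long getId() { return id; }
    }
}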

Summary of the approach

Finally, let's summarize the new approach:
  • Treat main entities as independent records.
  • Treat sub-entities as complex properties, to be included as part of the main entities.
  • Store the ids, names and foreign keys of the main entities in the relational database. The relational database serves as a bridge, linking the independent objects in the noSQL database.
  • Store the main entities, with updated ids, in the noSQL database.
  • Any CRUD operation will require committing to the 2 databases at the same time.
Pros:
  • Off-loads the data storage task from the relational database and lets it do what it does best: storing relationships.
  • Best of both worlds, with the text search and scalability of the noSQL database and the relation searching of the relational database.
Cons:
  • Maintaining 2 databases.
  • No single source of truth. Any corruption happening in one of the two databases will cause data loss.
  • Code complexity.
Possible alternative
  • Storing data in a graph database that offers text search capability. This is quite promising as well, but I have not done any benchmark to prove its feasibility.

Conclusion

The solution is pretty complex, but I find it interesting because the scalability issue is solved at the code level rather than the database level. By splitting the data out, we may tackle the root cause of the issue and be able to find some balance between performance and maintenance effort.

The complexity of this implementation is very high but there is no simple implementation for big data.