Thursday, April 29, 2010

Financial Data Models

What is a financial data warehouse model?

A financial data warehouse model is a predefined business model of a bank. It consists of entities and the relationships between those entities. Because it is geared towards use in a data warehouse environment, most of these models also include special entities for the aggregation of data and for hierarchies.

What objects do they consist of?
The most common objects included in these models are (a simplified sketch follows the list):
Involved party – a hierarchy that includes both organizations (including the bank's own organization) and individuals;
Product – a hierarchy that consists not only of products but also of services and their features;
Location – the address of a party;
Transaction – a transaction made by the client;
Investments and accounting – positions, balances and the transaction accounting environment;
Trading and settlement – trading, settlement and clearing, compliance and regulation;
Global market data – issue information, identifiers, FX rates, corporate actions and statistics;
Common data – calendars, time zones, classifications.
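
Purely as an illustration (the entity and attribute names below are my own simplification, not taken from any vendor's model), a few of these objects could be sketched like this:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical, heavily simplified entities; real vendor models contain
# hundreds of entities and attributes per subject area.

@dataclass
class InvolvedParty:
    party_id: int
    party_type: str                        # e.g. "INDIVIDUAL" or "ORGANIZATION"
    parent_party_id: Optional[int] = None  # link into the party hierarchy

@dataclass
class Product:
    product_id: int
    name: str
    parent_product_id: Optional[int] = None  # product/service/feature hierarchy

@dataclass
class Location:
    location_id: int
    party_id: int                          # the address belongs to a party
    country: str
    city: str

@dataclass
class Transaction:
    transaction_id: int
    party_id: int
    product_id: int
    booking_date: date
    amount: float
    currency: str                          # multi-currency support
```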

Using these objects you can create the most typical banking reports, for instance customer attrition analysis, wallet share analysis, cross sell analysis, campaign analysis, credit profiling, Basel II reporting, liquidity analysis, product profitability, customer lifetime value and customer profitability.
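
As a toy example of how such a report falls out of these objects (the column names and figures are invented for illustration), product profitability is little more than an aggregation over transaction-level data:

```python
from collections import defaultdict

# Toy illustration of a product profitability report; the revenue and cost
# columns are assumptions, not part of any vendor model.
transactions = [
    {"product": "Mortgage", "revenue": 1200.0, "cost": 800.0},
    {"product": "Savings",  "revenue": 300.0,  "cost": 150.0},
    {"product": "Mortgage", "revenue": 900.0,  "cost": 600.0},
]

profit_by_product = defaultdict(float)
for t in transactions:
    profit_by_product[t["product"]] += t["revenue"] - t["cost"]

for product, profit in sorted(profit_by_product.items()):
    print(f"{product}: {profit:.2f}")
```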

Who are the main providers of these models?

There are several main vendors of these models:
· IBM - Banking Data Warehouse Model;
· Financial Technologies International (FTI) – StreetModel;
· Teradata – Financial Management Logical Data Model;
· FiServ – Informent Financial Services Model;
· Oracle – Oracle Financial Data Model.

I am not going into the details of these models here; they might be available on the vendors' websites.

If you are starting a small data warehouse project to model part of the bank, these models are probably not the way to go. The advantage of having a proven ready-to-use model does not outweigh the disadvantages of the high investment, customization to your situation and training that you will need.

If you are thinking of starting a series of data warehouse initiatives that have to lead to a company-wide data warehouse, these models might accelerate the creation of this data warehouse environment considerably. Even so, the disadvantages mentioned below will still apply.



Advantages of financial models

1. They are very well structured, and they are very extensive;
2. They can be implemented quickly and facilitate re-use of models;
3. They are created using proven financial knowledge and expertise;
4. They create a communication bridge between IT specialists and banking specialists;
5. They facilitate the integration of all financial processes;
6. They support multi-language, multi-currency environments;
7. They come with an extensive library of documentation and release guides;
8. They can be implemented on a variety of platforms;
9. They are completely banking specific.

Disadvantages of financial models

The following are disadvantages of most, but not all, financial models:

1. You will need to put in a lot of effort to align your own model of the bank with these extensive, detailed financial models;
2. They are generally based on banking in the United States or the United Kingdom, which might differ from banking in the Netherlands or elsewhere in Europe;
3. All banking terms are defined, but does your bank agree with these definitions?
4. You will need people with in-depth knowledge of both the banking processes and the model you use. These people are very scarce, and flying in consultants from one of the model’s vendors can be a costly business. Vendors who claim you do not need a lot of knowledge to use these models are not telling the truth;
5. Because of the interrelated nature of the models and their extensiveness, you will sometimes have to populate objects for which you do not have the data. This results in workarounds that are not desirable;
6. You buy the whole model, not pieces of it. For a small project this will result in a lot of overhead and extra cost;
7. Every model comes with a certain set of tools that someone in the department will have to learn;
8. New versions of the models come with changes to the model, which you will have to examine to determine the impact on your current models.

The choice of which financial model to use is a difficult one. The models presented are all extensive and could all fit the bill. For your particular situation, some extensive research will be needed. Hardware and software standards and cost will surely have an impact on this decision.

Wednesday, April 14, 2010

Process - ITIL. COBIT. ISO 9000. Sarbanes-Oxley. Even PMBOK

Processes and models come in many flavors, shapes and sizes. Whether they advocate better quality management, better project management, better corporate governance or better auditability and control, their fundamental motivation--at least theoretically--is to, well, make things better. Models don’t start out with the underlying intent of making things worse. That would be unproductive, irrational and entirely unhelpful. The principle is that the model provides a better way of managing than whatever came before.

Something very curious has happened, though, with the countless models that have been implemented under the guise of “making management better”. In many instances, the result has been far from an improvement. The reality is that many implementations have made things worse.

Ironic? Certainly. Unhelpful? Unquestionably. But why? What is it that organizations are doing that takes a well-intentioned, well-meaning and purportedly well-crafted model and turns it into something that is considered bureaucratic, ill-guided and--in a couple of noteworthy instances--downright evil? And what can we do differently that will enable positive results, rather than haunted cries of “not again”?!?

The COBIT framework was developed by the IT Governance Institute, a self-described “research think tank” that was established in 1998 to support the improvement of IT governance. While its purpose is to define an effective, architecturally driven means of managing IT in support of the enterprise, the emphasis of COBIT is on controls, not processes. In other words, it does not define how activities and initiatives should be done, but instead what controls should be in place to ensure that functions are being performed correctly.

In one organization that adopted COBIT, however, the resulting activities quickly descended into the creation of far more rigor, oversight and bureaucracy than anyone in the organization expected or valued. Despite the lack of expectation or perceived value, the organization still proceeded down the path it had set for itself. Why didn’t it adjust its course, or even stop? How did this take on a life of its own? And how can other organizations learn a lesson from this experience and not do the same thing next time?

When looking at how industry standard models and frameworks are adopted, there are a number of traps that organizations allow themselves to fall into, which collectively can lead to the same slippery slope that the organization described above found itself on:

Because it’s the right thing to do. As noted, no one implements a model for the sake of it, or simply for the sake of creating bureaucracy. The models that exist do so for a reason. Creating visibility and momentum around this model or that, however, requires marketing and selling. Books are written, conferences staged and consultants bray that organizations that fail to adopt this model or that are at best misguided and at worst “doomed to fail”.

We’re just dealing with growing pains. Once an organization has made the choice to adopt this model or that framework, the implementation necessarily requires effort. Adoption and use require that much more work. The literature on change management and implementation quite rightly points out the productivity impacts that can be encountered when adopting a change. When faced with the pains of adoption, however, legitimate concerns about the relevance and appropriateness of an approach risk being dismissed as just growing pains. Rather than such concerns being assessed objectively, those raising them risk being perceived as naysayers who are “not on board”.

The technical imperative trumps the organizational need. Models are theoretically adopted to deliver business value. The implementation of any improvement initiative is frequently tied to a promise of improved business results, and that promise is what is sold to the business. Like the organization described earlier, however, once agreement to adopt is secured, the actual implementation tends to be driven more by technical than by business imperatives. Business oversight is assumed to have ended with the decision to proceed in the first place, and the proper level of business scrutiny over what is actually implemented tends not to occur. The phenomenon of “inmates running the asylum” is really the technical side implementing what it thinks is right, without the regular and necessary check-ins with the business side of the organization as to whether it makes sense.

All of it, and as rigorously as possible. Models provide choices and alternatives. A careful reading of the introduction to the PMBOK, for example, reveals that there isn’t an expectation that every aspect is relevant for all projects. Appropriate and intelligent adaptation and application is essential. Sadly, when implementing a defined model, especially one that has been adopted as a best practice, the presumption is that everything it offers is good, appropriate and valuable. Rather than evaluating trade-offs and choosing what to implement, and how it should be implemented, the default position is that if the model says we should do it, then we should do it. Consequences in terms of the costs of adoption and the diminishing returns of benefits get dismissed in favor of rigorous adherence. After all, if this is what a “best” practice looks like, then any compromise runs the risk of becoming merely good, mediocre or even bad.

Adapting would “undermine the spirit and intent” of the model. Closely related to the presumption that the full model represents the best of all possible implementations is a related assumption: If adaptation were appropriate, the model would already be adapted. Again, the presumption is that because the model is the way it is, its integrity must be preserved. Adaptation is compromise. Compromise is assumed to be sub-optimal. Intelligent application of the model, in the eyes of the true believer, is heresy.

The result of these trends is implementations that are complete, universal and uncompromising in their adherence to what is viewed as “right”, unfortunately losing sight of what is fitting and practical. Models are just that--they are representations of reality. They are not reality, nor are they replacements for reality. They are suggestions of approaches that must be intelligently and reasonably considered by organizations in order to identify what is logical and appropriate, given the culture, context and management style of the organizations adopting them.

What this means is that the project managers and teams that implement models need to take a deep breath before proceeding to really think through what the results will mean for the organization. Often, the kickoff of an improvement effort is participation in a workshop, training course or boot camp to familiarize the team with the model and its purpose. It is at these events where the implementation can take on its sheen of ideology.

After all, the workshops are led by articulate, impassioned and well-meaning advocates for the approach being explored. They believe in what they are teaching and the value the model offers, and they have a host of horror stories to share regarding failures and consequences of incomplete or inappropriate adoption, or of not starting down this path in the first place. While education is fine, the second activity must be a sober reflection on what the implementation will mean for the organization. What fits, and what doesn’t? What makes sense in the context of the organization, and what won’t work? The fundamental question to be asked is how the principles of the model can be adopted and adapted, not how an ideologically pure and perfect version of the model can be shoehorned in and made to fit.

More importantly, organizational oversight is crucial. The executive agreement to adopt and proceed with an implementation requires a level of understanding of what the organization is signing on to when it chooses to proceed. This means that executives need to familiarize themselves with the principles and purposes of the models being considered. More importantly, they need to understand how these principles suit the context of the organization they lead. And most importantly, they need to provide the ongoing oversight of what is proposed to be actually implemented, constantly asking whether what is proposed makes sense, is relevant and will ultimately deliver value.

Models and frameworks abound in today’s marketplace. As organizations take stock of how they are performing, and seek improvement opportunities in the face of an uncertain marketplace, these models become tempting means of short-circuiting and accelerating the real work of improvement. Certainly, models like ITIL and COBIT have a place as a repository of practices and experiences that organizations can consider.

They are not blueprints for improvement, however, nor are they processes that can be adopted wholesale. They are representative principles of what can work. It is up to any organization considering them, however, to figure out what they can do to make them work in their context and environment. As has been said many times before: caveat emptor, let the buyer beware.

Partitioning data (Best Practices) in DataStage E.E. 8.1

In most cases, the default partitioning method (Auto) is appropriate. With Auto partitioning, the Information Server Engine chooses the type of partitioning at runtime based on stage requirements, the degree of parallelism, and the source and target systems. While Auto partitioning will generally give correct results, it might not give optimized performance; based on the requirements, partitioning can be optimized within a job and across job flows.

Objective 1

Choose a partitioning method that gives close to an equal number of rows in each partition, while minimizing overhead. This ensures that the processing workload is evenly balanced, minimizing overall run time.
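
As a rough illustration in plain Python (not DataStage itself), the balance of a candidate partitioning method can be checked by counting rows per partition and comparing the heaviest partition against an ideal even split:

```python
from collections import Counter

def partition_skew(rows, num_partitions, partitioner):
    """Count rows per partition and report the skew of the heaviest one.

    `partitioner` maps a row to a number; a skew close to 1.0 means the
    workload is evenly balanced across partitions.
    """
    counts = Counter(partitioner(row) % num_partitions for row in rows)
    ideal = len(rows) / num_partitions
    return counts, max(counts.values()) / ideal

# Example with an evenly distributed integer key (hypothetical data).
rows = [{"account_id": i, "amount": i * 1.5} for i in range(1000)]
counts, skew = partition_skew(rows, 4, lambda row: row["account_id"])
print(counts, skew)  # roughly 250 rows per partition, skew close to 1.0
```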

Objective 2
The partition method must match the business requirements and stage functional requirements, assigning related records to the same partition if required.

Any stage that processes groups of related records (generally using one or more key columns) must be partitioned using a keyed partition method. This includes, but is not limited to: Aggregator, Change Capture, Change Apply, Join, Merge, Remove Duplicates, and Sort stages. It might also be necessary for Transformers and BuildOps that process groups of related records.
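
For example, in a simplified Python sketch (not the engine's actual implementation), hash partitioning on the grouping key guarantees that all rows sharing that key land in the same partition, so a per-partition aggregation still produces correct group totals:

```python
from collections import defaultdict

def hash_partition(rows, key, num_partitions):
    """Assign each row to a partition based on a hash of its key column."""
    partitions = defaultdict(list)
    for row in rows:
        partitions[hash(row[key]) % num_partitions].append(row)
    return partitions

rows = [
    {"customer_id": "C1", "amount": 100.0},
    {"customer_id": "C2", "amount": 250.0},
    {"customer_id": "C1", "amount": 50.0},
]

# All "C1" rows end up in the same partition, so summing per partition
# (as an Aggregator stage would) yields the correct total per customer.
for part, part_rows in hash_partition(rows, "customer_id", 4).items():
    totals = defaultdict(float)
    for row in part_rows:
        totals[row["customer_id"]] += row["amount"]
    print(part, dict(totals))
```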

Objective 3

Unless partition distribution is highly skewed, minimize re-partitioning, especially in cluster or Grid configurations.

Re-partitioning data in a cluster or Grid configuration incurs the overhead of network transport.

Objective 4
The partition method should not be overly complex. The simplest method that meets the above objectives will generally be the most efficient and yield the best performance. Using the above objectives as a guide, the following methodology can be applied (a small sketch of the modulus and round robin assignments follows the list):

Start with Auto partitioning (the default).

Specify Hash partitioning for stages that require groups of related records:
· Specify only the key column(s) that are necessary for correct grouping, as long as the number of unique values is sufficient
· Use Modulus partitioning if the grouping is on a single integer key column
· Use Range partitioning if the data is highly skewed and the key column values and distribution do not change significantly over time (the Range Map can be reused)

If grouping is not required, use Round Robin partitioning to redistribute data equally across all partitions.
· Especially useful if the input Data Set is highly skewed or sequential

Use Same partitioning to optimize end-to-end partitioning and to minimize re-partitioning.
· Be mindful that Same partitioning retains the degree of parallelism of the upstream stage
· Within a flow, examine upstream partitioning and sort order and attempt to preserve them for downstream processing. This may require re-examining key column usage within stages and re-ordering stages within a flow (if business requirements permit).
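
Purely to illustrate how these assignment rules differ (again plain Python, not DataStage), modulus partitioning takes the integer key modulo the partition count, while round robin ignores the data values entirely and deals rows out in turn:

```python
from itertools import cycle

def modulus_partition(rows, key, num_partitions):
    """Keyed: the partition number is the integer key modulo the partition count."""
    return [(row[key] % num_partitions, row) for row in rows]

def round_robin_partition(rows, num_partitions):
    """Keyless: deal rows out in turn for an even spread, with no grouping."""
    turns = cycle(range(num_partitions))
    return [(next(turns), row) for row in rows]

rows = [{"account_no": n} for n in (1001, 1002, 1003, 1004, 1005)]
print(modulus_partition(rows, "account_no", 4))  # grouped by key value
print(round_robin_partition(rows, 4))            # evenly spread, no grouping
```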


Across jobs, persistent Data Sets can be used to retain the partitioning and sort order. This is particularly useful if downstream jobs are run with the same degree of parallelism (configuration file) and require the same partition and sort order.
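
As a loose analogy in plain Python (persistent Data Sets are a proprietary on-disk format, not the plain files shown here), the idea is that each partition is persisted separately so that a downstream job running with the same degree of parallelism can read back its own partition without re-partitioning:

```python
import json
from pathlib import Path

# Hypothetical paths and file format; this only illustrates keeping
# partition boundaries intact between an upstream and a downstream job.

def write_partitions(partitions, directory):
    """Upstream job: persist each partition to its own file."""
    Path(directory).mkdir(exist_ok=True)
    for part_no, rows in partitions.items():
        Path(directory, f"part_{part_no}.json").write_text(json.dumps(rows))

def read_partition(directory, part_no):
    """Downstream job with the same degree of parallelism: read back only
    its own partition, so no re-partitioning is needed."""
    return json.loads(Path(directory, f"part_{part_no}.json").read_text())
```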