Cloud Archive - Bitwise

Implementing Fine-Grained Data Access Control: A Complete Guide to GCP Column-Level Policy Tags

What you will learn
  • Fundamentals of Fine-Grained Data Access Control
  • Learn how to implement GCP column-level security using policy tags and data masking rules
  • Understand best practices for taxonomies and inheritance structures
  • Discover automated approaches to policy tag management
  • See real-world examples of fine-grained access control

The Challenge

As organizations scale, implementing data governance becomes more complicated, especially when cross-functional teams from marketing, finance, product and sales work on data initiatives. The increasing use of artificial intelligence, machine learning and now generative AI makes it even more difficult, as legal teams require transparency and scrutiny over how data is accessed by the different teams.

Companies that manage data are facing a big challenge. They need to share important business information with the right people, but they also have to keep sensitive data safe.

This sensitive data includes things like:

  • Personally Identifiable Information (PII): Social Security numbers, tax IDs, addresses, emails, passwords, etc.
  • Financial information: Bank account numbers, financial statements, etc.
  • Medical information: Diagnoses, treatment records, etc.

The old way of controlling who sees what data is too simple. It’s like putting a big lock on a whole table in a library, instead of locking individual books. This can let people see things they shouldn’t.

As companies deal with more and more complicated data, they need a much better way to control who can access what. This is called ‘granular access control’ and it’s becoming essential for keeping data safe.

Here are some statistics from IBM and Verizon’s 2024 data breach reports:

  • The staggering financial impact of data breaches reached a global average of $4.88 million!
  • The most common type of data stolen or compromised was customer PII, at 46%, which can be used for identity theft and credit card fraud.
  • The majority (around 62%) of data breaches are financially motivated.
  • A significant increase in data breaches compared to previous years.

In my role as a Data Engineer at a leading fintech organization, I encountered significant data governance challenges while managing a petabyte-scale data warehouse. Our team was tasked with implementing comprehensive PII data protection across an extensive data ecosystem comprising over 10,000 tables and 1,000+ ELT processing jobs.

The project presented two critical challenges that required careful consideration and strategic planning:

  • Implementing robust data security measures while ensuring zero disruption to existing data products and maintaining seamless service for our customers.
  • Developing an efficient methodology to discover and classify sensitive data across thousands of tables, followed by implementing appropriate redaction and encryption protocols based on defined sensitivity rules.

The scale and complexity of this undertaking were particularly noteworthy given our active data warehouse environment, which required maintaining business continuity while enhancing security protocols.

The Solution: Column-Level Policy Tags in GCP

What Are Policy Tags?

Policy tags in Google Cloud Platform provide a hierarchical system to define and enforce access controls at the column level. Think of them as intelligent labels that:

  • Define security classifications for data
  • Inherit permissions through a taxonomy structure
  • Integrate with IAM roles and permissions
  • Enable dynamic access control

These policy tags are managed using taxonomies in BigQuery. A Taxonomy in BigQuery acts like a hierarchical container system that organizes your policy tags – think of it as a secure file cabinet where each drawer (category) contains specific folders (policy tags) for different types of sensitive data.

These policy tags are then attached to specific columns in your BigQuery tables to control who can see what data. Dynamic data masking on policy tags lets you set up different masking rules for different roles based on their needs, such as redaction, nullification or a custom user-defined function, without modifying the actual data in the table.

For example, a “PII_Taxonomy” might have categories like “High_Sensitivity” containing policy tags for Government IDs and social security numbers, while “Medium_Sensitivity” could contain tags for email addresses and phone numbers.

To solve our challenges, we attached policy tags to sensitive data fields and then managed permissions at the tag level. This gave us the flexibility to implement role-based access control (RBAC) without disrupting any table data or its end users. See the flow chart below for the high-level steps.

Business process flow example

Working with subject matter experts and our understanding of the legacy domain, we defined a list of sensitive data elements that can be ingested into the data warehouse. We then categorized the list based on compliance and legal terms into high, medium and low severity sensitive data fields, along with their consumption patterns, and used it to create our hierarchical taxonomy structure. See the implementation guide below for detailed steps and commands to create the taxonomy structure.

Then we created a program that identified sensitive data fields and profiled sample data to confirm its sanity. It also determined which policy tag to attach to each data field, giving us a matrix of table, column and the policy tag to be attached.
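To illustrate how such a discovery program can seed the table/column/policy-tag matrix, here is a minimal sketch that scans BigQuery's INFORMATION_SCHEMA for column names matching common PII patterns. The project ID, region and name patterns are hypothetical, and the real program also profiled sample values, which is not shown here.

```bash
#!/usr/bin/env bash
# Hypothetical project and region used only for illustration.
PROJECT="my-project"
REGION="region-us"

# List candidate PII columns by matching column names against simple patterns.
# A real discovery program should also profile sample values before deciding
# which policy tag (high / medium / low) to assign.
bq query --use_legacy_sql=false --format=csv "
  SELECT table_schema, table_name, column_name
  FROM \`${PROJECT}.${REGION}.INFORMATION_SCHEMA.COLUMNS\`
  WHERE REGEXP_CONTAINS(LOWER(column_name),
        r'ssn|tax_id|driving_license|passport|credit_card|email')
" > candidate_pii_columns.csv
```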

Then we came up with our final program that actually attached policy tags to tables using the bq command-line tool: bq show --schema to get the latest structure of a table, add the policy tags to the schema JSON, and bq update to apply the updated schema back to the table in BigQuery.

Because there were 10,000+ tables, we released the changes in phases instead of one big bang.

Implementation Guide

Let’s create a taxonomy that categorizes PII sensitive data by severity. Each category can have sub-categories for specific policy tags to be applied to table columns. Refer to the diagram below:

Policy tags
Category tags allow us to manage access control at a higher level, reducing administrative overhead. To maximize effectiveness, define categories that align with your organization’s specific business processes and encompass all forms of sensitive information.
 

Step 1: Create a taxonomy with the parent policy tag 'high' and its child tag 'driving_license', as described in the diagram above:

  • Refer to the Python code in the Jupyter notebook create_taxonomy_and_data_masking for step-by-step execution
  • After executing the code, you should see a taxonomy and policy tag structure as below

Pii Sensitive Taxonomy

  • Repeat the same process to create the medium and low categories and the sub-tags for all required policy tags (a hedged command-line alternative to the notebook is sketched below)
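As an alternative to the notebook, the same taxonomy and policy-tag hierarchy can be created by calling the Data Catalog REST API directly. The sketch below is illustrative only: the project ID and location are placeholders, it requires jq, and you should verify the request bodies against the current PolicyTagManager API reference.

```bash
PROJECT="my-project"     # hypothetical project ID
LOCATION="us"
TOKEN="$(gcloud auth print-access-token)"
BASE="https://datacatalog.googleapis.com/v1"

# 1. Create the taxonomy with fine-grained access control enabled.
TAXONOMY=$(curl -s -X POST "${BASE}/projects/${PROJECT}/locations/${LOCATION}/taxonomies" \
  -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  -d '{"displayName": "pii-sensitive-taxonomy",
       "activatedPolicyTypes": ["FINE_GRAINED_ACCESS_CONTROL"]}' | jq -r '.name')

# 2. Create the parent policy tag "high" under the taxonomy.
HIGH=$(curl -s -X POST "${BASE}/${TAXONOMY}/policyTags" \
  -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  -d '{"displayName": "high"}' | jq -r '.name')

# 3. Create the child policy tag "driving_license" under "high".
curl -s -X POST "${BASE}/${TAXONOMY}/policyTags" \
  -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  -d "{\"displayName\": \"driving_license\", \"parentPolicyTag\": \"${HIGH}\"}"
```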

Step 2: Create data masking rules for the policy tag driving_license

  • Let's create two different masking rules for different teams, as below
  • One for the sales team, who need to see only the last 4 characters of the driving license
  • Another for the analytics team, who do not need to see the original value but only a unique hash for each distinct data value
  • Follow the steps in the Jupyter notebook to create these (a hedged REST sketch is also shown below)
  • Once you are done, you can see the masking rules attached to your policy tag in the policy tag console, as below

Policy Tag
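For readers scripting this step instead of using the notebook, the masking rules correspond to data policies in the BigQuery Data Policy API. Treat the sketch below as an assumption-laden outline: the project, location and policy-tag resource name are placeholders, and the field names and predefined expressions (LAST_FOUR_CHARACTERS, SHA256) should be checked against the current API documentation.

```bash
PROJECT="my-project"   # hypothetical
LOCATION="us"
# Placeholder: full resource name of the driving_license policy tag created earlier.
POLICY_TAG="projects/my-project/locations/us/taxonomies/TAX_ID/policyTags/TAG_ID"
TOKEN="$(gcloud auth print-access-token)"
API="https://bigquerydatapolicy.googleapis.com/v1/projects/${PROJECT}/locations/${LOCATION}/dataPolicies"

# Masking rule for the sales team: expose only the last four characters.
curl -s -X POST "${API}" \
  -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  -d "{\"dataPolicyId\": \"pii_last_four\",
       \"dataPolicyType\": \"DATA_MASKING_POLICY\",
       \"policyTag\": \"${POLICY_TAG}\",
       \"dataMaskingPolicy\": {\"predefinedExpression\": \"LAST_FOUR_CHARACTERS\"}}"

# Masking rule for the analytics team: replace values with a SHA-256 hash.
curl -s -X POST "${API}" \
  -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  -d "{\"dataPolicyId\": \"pii_hash\",
       \"dataPolicyType\": \"DATA_MASKING_POLICY\",
       \"policyTag\": \"${POLICY_TAG}\",
       \"dataMaskingPolicy\": {\"predefinedExpression\": \"SHA256\"}}"
```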

Step 3: Apply policy tags to the columns containing sensitive data

  • Run bq commands on the command line to attach the policy tag to your table (a sketch of the workflow follows below)
  • Refer to the commands in attach_policy_tag_to_column.sh
  • After applying the tag, you should be able to see it in the table schema in the BigQuery console

Current Schema
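The attach_policy_tag_to_column.sh script is not reproduced here, but the underlying bq workflow looks roughly like the sketch below; the dataset, table and policy-tag resource names are placeholders.

```bash
DATASET_TABLE="your_dataset.customer_data"   # placeholder dataset.table

# 1. Pull the current schema of the table.
bq show --schema --format=prettyjson "${DATASET_TABLE}" > schema.json

# 2. Edit schema.json and add the policy tag resource name to the sensitive
#    column (done by hand here, or with a small jq/Python helper), e.g.:
#    {
#      "name": "customer_driving_license",
#      "type": "STRING",
#      "policyTags": { "names": ["projects/PROJECT/locations/us/taxonomies/TAX_ID/policyTags/TAG_ID"] }
#    }

# 3. Push the updated schema back to BigQuery; only the policyTags change is applied.
bq update "${DATASET_TABLE}" schema.json
```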

Step 4: Assign IAM permissions. To enable access control, first provide the necessary permissions to the applicable users.

  • Assign roles/bigquerydatapolicy.maskedReader to your sales user on the pii_last_four masking rule
  • Assign roles/bigquerydatapolicy.maskedReader to your analytics user on the pii_hash masking rule
  • Assign roles/datacatalog.categoryFineGrainedReader to the users who need to access the raw data
  • Refer to set_permissions.sh for the gcloud commands and follow the notebook

Step 5: Enable Access control

  • If you have data masking rules, access control is enabled automatically and cannot be disabled, so you must authorize users before enabling masking rules or enforcing access control.
  • If you do not have data masking rules, access control is enforced manually from the console as shown below. When access control is enforced without masking rules, users who do not have access to a policy tag will get an error if they try to query that field.

Access Control

Step 6: Test data access with different types of users and roles

  • A sales user with the maskedReader role on the last-4 rule will see only the last 4 characters of the driving license


  • Analytics users with the maskedReader role on the hash rule will see only the hashed version of the driving license


  • Users with FineGrainedReader will be able to access both raw sensitive and non-sensitive data seamlessly


  • Users without the FineGrainedReader or maskedReader role will get an error if they select a column that has a policy tag

Error: Access Denied: BigQuery BigQuery: User has neither fine-grained reader nor masked get permission to get data protected by policy tag “pii-sensitive-taxonomy : driving_license” on column your_dataset.customer_data.customer_driving_license.

  • Users without FineGrainedReader or maskedReader will still be able to access non-sensitive data that is not tagged; a sample verification query is sketched below
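One quick way to verify each persona's view is to run the same query while authenticated as each test user; the dataset and table names below match the earlier screenshots and are placeholders.

```bash
# Run once as the sales user, once as the analytics user, and once as a user
# with no policy-tag permissions; compare the returned values and errors.
bq query --use_legacy_sql=false \
  'SELECT customer_driving_license
   FROM your_dataset.customer_data
   LIMIT 5'
```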

Step 7: Implement automated monitoring of the policy tag lifecycle, watch for unauthorized tag removals or modifications, and remediate potential security gaps.

Results and Benefits

  • Enabled selective data access control at column level, allowing organizations to protect sensitive fields (like tax IDs, credit card numbers) while keeping non-sensitive data (like purchase history) accessible to appropriate users
  • Strengthened regulatory compliance by providing granular control and audit trails for sensitive data access, helping meet both internal policies and external regulations (GDPR, CCPA, etc.)
  • Ensured continuous compliance through automated monitoring of policy tag lifecycle, with real-time alerts for unauthorized tag removals or modifications, enabling prompt remediation of potential security gaps
  • Enhanced customer and partner trust by demonstrating robust protection of their sensitive information through precise, documented data access controls
  • Mitigated security risks by preventing unauthorized access to sensitive columns while maintaining business efficiency, replacing the traditional “all-or-nothing” access approach
  • Improved operational efficiency by allowing data analysts to access necessary non-sensitive data without being blocked by overly broad security restrictions

Use a phased approach for large data warehouses

  • Prioritize Business Continuity: Implement changes in a phased manner to avoid significant service interruptions and perform thorough impact analysis of downstream applications and ELT pipelines
  • Identify Stakeholders: Determine all users and service accounts that currently access sensitive data.
  • Assess Data Access Patterns: Analyze existing data access methods, such as SELECT * queries and views, to identify potential impacts.
  • Categorize Access Needs: Classify users, groups, and processes based on their required level of access to sensitive information.
  • Implement Gradual Access Control: Before enabling full access control, grant fine-grained permissions to essential users and service accounts.
  • Communicate Changes: Proactively inform affected teams about the upcoming changes and establish clear escalation procedures for incident reporting

Best Practices & Tips Taxonomy Design

  • Create logical groupings based on sensitivity levels
  • Use meaningful, standardized naming conventions
  • Document taxonomy decisions and rationale
  • Regularly audit policy tag assignments
  • Implement least-privilege access principles
  • Monitor and log access patterns

Conclusion

As organizations continue to navigate the complexities of data governance, implementing column-level security through GCP policy tags represents a significant leap forward in protecting sensitive information while maintaining operational efficiency. Our journey through implementing this solution at petabyte scale demonstrates that even large-scale data warehouses can successfully transition to granular access controls without disrupting business operations.

For organizations looking to enhance their data security posture, GCP’s policy tags offer a robust, scalable solution that aligns with modern data governance requirements. The phased approach we’ve outlined provides a practical roadmap for implementation, whether you’re managing thousands of tables or just beginning your data governance journey.

Contact Us to discuss your data governance needs with our experts and determine if GCP policy tagging and dynamic data masking aligns to your objectives.

What’s Next

For users who have already implemented policy tagging and are looking for advanced policy tag management, here are some next steps to consider and apply as needed.

Technical Resources

Migrating SharePoint to Cloud or Latest On-Premise Version (Part II – Organizational Readiness)

Internal Buy-in and Readiness

Empower your executive sponsor to be a consistently visible part of your change program. This helps sustain use of the new tool and the overall transformation. Bitwise recommends setting up various channels so that the executive sponsor constantly communicates broadly about the plan. We usually set up Microsoft Teams live events to broadcast to employees and encourage executive sponsors to use the technology themselves. These channels can be used on top of traditional communications like emails, articles, etc.

Stakeholders

Stakeholders are people who have an interest in and influence over your project, regardless of title.

Identifying stakeholders can often be challenging. Traditionally, it involves reviewing the effect of the migration on different lines of business and how it cascades through the organizational structure. Bitwise recommends a bottom-up approach: studying the effect of change at the individual level and how it moves up the chain. To accelerate this process, we use automated scripts that help identify the top contributors, site owners, etc. with the click of a button.

Every stakeholder wants to know "what's in it for me?" when you start discussing changing the way they work. Too often we make the mistake of talking about product features and organizational benefits instead of empathizing with their day-to-day struggle to collaborate, communicate and get work done. This can easily be changed by shifting the center of gravity of the change program to the stakeholder experience. We usually meet directly with stakeholders on a regular basis and learn additional information about their business. We listen to their pain points and perceptions so that we can craft communications that will be successful and manage their expectations of what the migration can deliver and how it will change or improve their day-to-day activities.

Champions

Champions are an invaluable resource to drive change and ensure you have meaningful feedback from your employees. Champions could include key stakeholders. Champions are an extension of your implementation team that provide peer-to-peer learning, feedback and enthusiasm to your change project. Bitwise recommends building a platform for champions to share updates and successes and provide material ready to support their work, as well as build and nurture champion communities so that they can provide departmental and 1:1 support of employees during this transition. This will enable champions to drive messages and communicate the value and benefit throughout the rollout.

Planning the Migration Project

Whether you’re moving from SharePoint 2010 or 2013 to the latest on-premise version or to the cloud, the following steps will help you get started.

Timelines, Budgets and Success Criteria

Instead of going with a big bang approach, Bitwise recommends breaking the migration project down into manageable and logical pieces that can be boxed into fixed timeframes. With this approach, companies see success earlier. When developing the project plan, it is essential to build in success criteria for measuring whether or not the migration has been successful.

Traditional success criteria of executing a project within given timelines and budget aren't really a good measure of success. Success criteria should always comprise measures that include the health of the services for a full picture. People may be happy with the intent, but if they cannot get to the desired experience they will ultimately have negative sentiment. Quality, reliability, performance and the speed with which issues are resolved must be included in the success criteria.

User Adoption and Support

A common mistake in user adoption programs is to tout the benefits of moving to the cloud from the perspective of the organization or its IT department. These are not motivating factors for most employees. Employees are paid to drive results in a particular discipline, and so we must share with them how changes like the implementation of Microsoft Teams or other collaboration and communication solutions will benefit them.

Other benefits like anytime/anywhere access can often sound to an employee like “I have to work anytime and anywhere.” Instead, show the benefit of answering an important chat while in-between customers or while picking up your children from school (though not while driving!). Bitwise recommends using road shows, seminars/webinars, virtual broadcasts, lunch and learn sessions, etc. to help with user adoption.

Providing a support structure is essential to assist employees to adjust to the change and to build technical skills to achieve desired business results. There are various support models that can be used, such as on-call support, in-person support and online support, to help employees through the transformation.

Recap

The success of your migration depends on how well you communicate to internal stakeholders and champions before, during and after the project. Having a communication plan and framework in place will enable effective channels to build buy-in throughout the migration. Identifying the right migration approach and success criteria, and providing the right user support, ensures appropriate expectations are met to achieve successful adoption across the organization. For a complete discussion on migration, readiness for migration and lifecycle of the migration, watch our on-demand SharePoint Migration webinar.

Auto Scaling Applications on Pivotal Cloud Foundry

Setup App Auto Scaler

To enable auto-scaling, the application must be bound to the App Auto Scaler service. This service can be instantiated from Apps Manager or using the CF CLI (which requires the App Auto Scaler CLI plugin to be installed).

Application can be bound to App Auto Scaler service using:

  • Apps Manager user interface
  • CF command-line interface

Configuring Auto Scaler

Once the application has been bound to the App Auto Scaler service, it can be configured using various parameters (i.e. auto-scaling rules), which we will look at briefly below.

Configuring the scaling rules can be achieved through the Apps Manager user interface or the CF CLI.

The following are some useful CLI commands for configuring auto-scaling which are self-explanatory:

Cloud-Foundry-CLI-Commands

Similar to the `manifest.yml` file, auto-scaling rules can be maintained in an App Auto Scaler YML file, as seen below. We can give this file any name.

Auto-Scaler-YML-file

Here auto-scaling has been configured using the parameter "CPU utilization". If CPU utilization goes below 40%, CF will scale the application down to a minimum of 2 instances, whereas if CPU utilization rises above 70%, CF will scale the application up to a maximum of 4 instances.
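Since the screenshot of the file is not reproduced here, the snippet below is a hedged reconstruction of what such a manifest could look like for the rule just described (minimum 2 and maximum 4 instances, CPU thresholds of 40% and 70%). The key names follow the App Auto Scaler CLI plugin's manifest format as I recall it, so verify them against the documentation for your plugin version.

```bash
# Create demoApp-AutoScalar.yml next to the build file. Key names are assumed
# from the App Auto Scaler CLI plugin manifest format -- verify for your version.
cat > demoApp-AutoScalar.yml <<'EOF'
---
instance_limits:
  min: 2
  max: 4
rules:
- rule_type: "cpu"
  threshold:
    min: 40
    max: 70
scheduled_limit_changes: []
EOF
```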

Let's say we have created a YML file with the above scaling health rule as "demoApp-AutoScalar.yml", at the same level as our build file. Then we can use the command below to configure auto-scaling for our app named "DemoMyApp".

cf configure-autoscaling DemoMyApp demoApp-AutoScalar.yml

I would highly recommend the YML file configuration, as it can be maintained alongside the code base and provides advantages for modern deployment approaches like Blue-Green deployment.

How App Auto Scaler Determines When to Scale

The App Auto Scaler service determines whether to ramp application instances up or down, or maintain the current number of instances, by averaging the values of the configured metric over the last 120 seconds.

Every 35 seconds, the App Auto Scaler service evaluates whether to auto-scale the application using the approach mentioned above.

App Auto Scaler scales the apps as follows:

  • Increment by one instance when any metric exceeds the High threshold specified
  • Decrement by one instance only when all metrics fall below the Low threshold specified

Understanding Auto Scale Health Rule

The table below lists the metrics that you can base App Auto Scaler rules on:

  • CPU Utilization (rule_type: cpu): average CPU utilization across all instances of the app
  • Memory Utilization (rule_type: memory): average memory percentage across all instances of the app
  • HTTP Throughput (rule_type: http_throughput): total app requests per second divided by the number of instances
  • HTTP Latency (rule_type: http_latency): average latency of application responses to HTTP requests
  • RabbitMQ Depth (rule_type: rabbitmq): queue length of the specified queue

It is very important to understand application performance while applying scaling rules on HTTP Throughput and HTTP Latency. The following points should be considered while applying scale rules on throughput or latency of HTTP requests:

  • Initial number of application instances.
  • Performance benchmarking results of the application (to understand at what load application performance starts to deteriorate) and how many instances are needed to avoid going beyond that load.
  • While calculating HTTP Latency time, any backend service/database communication should also be taken into consideration, and if there is any proportional deterioration in backing services, they should be taken into account so as to not escalate an already deteriorated situation.
  • While setting up the rule on HTTP request, we should consider peak time traffic coming to application which helps to configure auto-scaling in an efficient manner. Your max instances for autoscaling should also be able to accommodate traffic considering the unavailability of other datacenters your app may be hosted on.

While setting up a RabbitMQ-based scale rule, --subtype is a required field that holds the name of the queue. For example, as seen below, we can also configure more than one RabbitMQ queue.

rabbitmq queue

Newer versions of CF also allow setting auto-scaling based on a combination of multiple metrics, such as those identified below:

CF

With recent releases of CF, we can also create custom metrics on which to configure auto-scaling for our application.

Schedule Application Auto Scaler

It is best to set up auto-scaling with multiple rules to handle rare scenarios, such as an overnight increase in traffic due to holiday seasons like Thanksgiving. These kinds of occurrences can be scheduled ahead of time.

PCF Auto Scaler provides functionality to schedule auto-scaling to handle rare 'known' events which may impact application availability or performance.

This can be achieved from Apps Manager. Go to your deployed application that is bound to the App Auto Scaler service, select 'Manage scaling', and then select 'Schedule Limit Change'. Below is a sample rule setup:

Sample rule setup

The above configuration will scale up the application on Nov 14, 2019, at 8 PM and scale it back down on Nov 15, 2019, at 8 PM.
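If you prefer to keep such schedules in version control rather than clicking through Apps Manager, the same autoscaler manifest format also supports scheduled limit changes. The sketch below is an assumption: the key names, timestamp format and instance limits are illustrative and must be checked against the App Auto Scaler CLI plugin documentation for your version.

```bash
# Scheduled limit changes mirroring the Apps Manager example above.
# All key names and values are assumptions -- verify against your plugin docs.
cat > scheduled-limit-changes.yml <<'EOF'
scheduled_limit_changes:
- executes_at: "2019-11-14T20:00:00Z"   # scale up on Nov 14, 2019 at 8 PM
  instance_limits:
    min: 4
    max: 8
  recurrence: 0                          # run once, no weekly recurrence
- executes_at: "2019-11-15T20:00:00Z"   # scale back down on Nov 15, 2019 at 8 PM
  instance_limits:
    min: 2
    max: 4
  recurrence: 0
EOF
# Merge this block into demoApp-AutoScalar.yml in place of the empty
# scheduled_limit_changes list, then re-run:
#   cf configure-autoscaling DemoMyApp demoApp-AutoScalar.yml
```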

Challenges While Configuring Auto Scaling

As mentioned in the official PCF Auto-Scaler known-issues documentation, some of the commands to enable or disable autoscaling from the CLI may not be supported in future versions of the CLI, so it is best to stick with Apps Manager or the Autoscaler API for now.

It is very important to select the correct metrics while configuring application auto-scaling. Improper metrics can produce unexpected results.

Consider the following scenario: it may seem like a good idea to scale on HTTP latency, since latency or response time seems like a good indicator of when the application is under load and may need to scale. Say your app typically takes 500 ms to respond. If there is a considerable load on the application, you would expect the response time to go up. But that may not always be true. Suppose your app is under a DDoS attack. Most of the input coming to the app is now invalid, and your app processes it in under 20 ms. If there are thousands of such requests, they will actually bring down the average response time of your app, and your app may scale down instead of up. In such scenarios, it might be better to combine multiple metrics such as CPU, HTTP throughput and HTTP latency, or use a custom metric for scaling.

Thus, we have seen that, if used properly, application autoscaling can be an important tool to ensure the reliability and availability of your application. For related information, check out our webinar.

Blue Green Deployment Strategy for Cloud based Services

Understanding Blue Green Deployment Strategy

This strategy requires that we have two versions in the production environment. One is the version that is current and LIVE in production (we can call this the Blue version). The other is the new version which we plan to promote and make LIVE (we can call this the Green version).

After deployment of the new version (Green), we do some health checks and perform sanity tests to ensure the new version is safe to promote to LIVE traffic. Once the Green version has been validated, we may choose to switch traffic to it. Now the Green version gets all the live traffic.

We can choose to keep the Blue version or discard it. At any point during the Blue Green deployment, if the Green version validation fails we can choose to roll back to the previous (Blue) version.

Challenges with Blue Green Deployment Strategy

One of the challenges with this strategy is making the application backward compatible, as both the Blue and Green versions run in parallel. Usually, if there is only an application code change, this is not a big deal.

The real challenge comes when the new version of the application requires a database structure change, like renaming or dropping a column. One way to work around this is to design your database changes in a phased manner, where the initial change does not modify existing object attributes but only adds new ones.

Once everything has been tested a migration can be done. However, this ties the development strategy to the deployment and is one of the challenges that come with Blue Green deployment.

Implementations

Router Based

In the router-based approach, the traffic flow to the live version of the service and the new version is controlled and switched via the Cloud Foundry (CF) router. Let's walk through it in a sequence of steps.

  1. Say we have a simple service that gives us the weather for a location. The current version of this service in production is v1. This is the Blue version. Now we want to promote a new version v1.1. This will be the Green version.
blue-green-Deployment-Weather-v1

  2. As you can see, the weather API is accessible via the URL weather.demo.com, so any request coming for the weather API is routed via the CF router to the current live production version (v1). The new version v1.1, though deployed, is not yet accessible via any URL. Now let us make the new version accessible via a temporary URL. This can be done through the Command Line Interface (CLI) command below:

    $ cf push green -n weather-green

    Now any request for the weather API via the production URL weather.demo.com continues to be routed to the current production version, while the new version is accessible via the new temporary URL weather-green.demo.com

weather-green-v1

  3. Now the developers and testers can validate the new version via the temporary URL. If validation of the new version is successful, we can also bind the original URL (route) to the new version.

    $ cf map-route green demo.com -n weather

    weather-green-v1.1

    The router now load-balances requests for the URL weather.demo.com between versions v1 and v1.1 of the Weather API.

  4. After some time, if we are able to verify that the new version is running without any problems, we can unmap the production URL from the Blue version (v1). We can also unmap, and then optionally remove, the temporary route mapped to the new version.

    $ cf unmap-route blue demo.com -n weather

    $ cf unmap-route green demo.com -n weather-green

    weather-demo

    This way we have promoted a new version of the weather API into production without any downtime.

Service Discovery Based

In the service discovery-based approach we use a service registry where services are registered. Let's take, for example, the Netflix Eureka service registry. Consumers of the service will not directly invoke specified endpoint URLs but will look up the URLs of the services they want to invoke from the registry and then invoke those URLs.

We first need to make the service instances discoverable. We do this by enabling the Discovery Client with the annotation @EnableDiscoveryClient on the Spring Boot app's main class. Before that, we need to add the dependency below to our Spring Boot project.

compile('org.springframework.cloud:spring-cloud-starter-netflix-eureka-client')

spring-version

When we need to switch traffic between the Blue and Green instances, we do so by registering the new version of the service with the same name and unregistering the old (live) version. Consumers continue to invoke the service in the same way, relying on the service registry to provide the service URLs. This can be done in stages as below.

  1. Deploy the new version of the service without registering it in the service registry. This is the Green version; the live version we will call the Blue version. We perform validation tests on the Green version independently.
blue-version

  2. If the tests are good, we register the Green version of the service with the same app name. Now live traffic goes to both the Blue and Green instances.
blue-green-instance

  3. If everything seems normal, we unregister the Blue version, and live traffic now goes only to the Green instance.
green-instance

Canary Deployment

A variant of Blue Green deployment is the canary deployment (coarse-grained canary). The main goal of this strategy is to minimize the impact on users of rolling out an erroneous version of the application into production. This can be explained in the steps below.

  1. Install the application to a server instance where live production traffic cannot reach it.
  2. After internal validation of the application, we can start to route a small subset of the LIVE traffic to the new version. This can be done at the router. Say we want to first allow only internal company users to use it, and then slowly users in a city, state or country, and so on.
  3. At any time during this process, if a critical issue is identified we can roll back the new version.
  4. If all looks good, we can route all the traffic to the new version and decommission the old version or hold it for some time as a backup.

This is one way to achieve coarse-grained canary deployments without any special setup; a command-line sketch of this flow follows below.
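Reusing the weather API example from the router-based section, a coarse-grained canary can be approximated with nothing more than routes and instance counts. The app names and the rough 80/20 split below are illustrative, and the split works because the CF router balances requests across all instances mapped to a route.

```bash
# Push the canary with only a temporary route (no production traffic yet).
cf push weather-canary -n weather-canary

# After internal validation, attach the production route to the canary as well.
cf map-route weather-canary demo.com -n weather

# The router round-robins across instances, so instance counts control the split:
# 8 existing + 2 canary instances sends roughly 20% of traffic to the canary.
cf scale weather -i 8
cf scale weather-canary -i 2

# Roll back at any time by unmapping the canary:
#   cf unmap-route weather-canary demo.com -n weather
# Or complete the rollout by unmapping the old app and cleaning up:
cf unmap-route weather demo.com -n weather
cf unmap-route weather-canary demo.com -n weather-canary
```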

Future Outlook

PCF Native Rolling App Update (Beta)

PCF 2.4 natively supports a zero-downtime rolling deployment feature. This is, however, in beta, and you need CLI v6.40 or later to use it. It is not a full-featured Blue-Green deployment process; rather, it allows you to perform a rolling app deployment. Below are some of the commands that support this:

Deployment (Zero downtime): cf v3-zdt-push APP-NAME

Cancel deployment (No Zero downtime guarantee): cf v3-cancel-zdt-push APP-NAME

Restart (Zero downtime): cf v3-zdt-restart APP-NAME

However, before using these commands it must be noted that they are in the beta phase and there are some limitations to their use. For more information, refer to the PCF documentation.

Native Fine Grained Canary (beta)

PCF is in the process of replacing its Gorouter implementation with a service mesh (Istio) based solution. This will allow for lots of exciting new capabilities, including weighted routing. Weighted routing natively allows you to send percentage-based traffic to the canary app.

We will look at these upcoming capabilities in a future article. For related information, check out our webinar.

Merchant Acquirers – Considerations for Cloud Migration

Traditional Merchant Acquirer Challenges

Cost and margin pressures, as well as other challenges faced by merchant acquirers, especially the ‘traditional’ acquirers, are well known and documented in various articles. Also, in the public domain, is the fact that the relatively recent entrants to the merchant acquiring space have been changing the game with Fintech style innovation, lower operations costs and rapid enablement of support for newer payment methods and processes. A case in point is the significant difference in the merchant onboarding experience for ‘traditional acquirers’ compared to the ‘fintech’ style acquirers.

All ‘traditional’ acquirers realize the importance of building and enhancing their digital capabilities. Most of these organizations have considered digital transformation as well as cloud adoption as they look to keep pace with innovation in the merchant acquiring industry. However, these tend to manifest as disjointed solutions. For example, with regards to digital, the focus is more on exposing certain services across channels. Likewise, with regards to cloud, the focus is on measures such as migration of data to cloud or even lifting and shifting applications to cloud.

While these measures certainly benefit the organization, the fruits of such labor are limited. Apart from a limited reduction in costs and a marginal improvement in reliability and scalability, migrating data to the cloud has limited strategic benefits in terms of providing new capabilities, like supporting new payment types or more customer-oriented processes. Likewise, migrating existing applications to the cloud "as-is" is primarily aimed at reducing IT operating costs, with limited business benefits or value-add from a customer proposition.

What is the way forward?

True value will only be recognized when acquirers re-imagine their product proposition and service delivery. This may include identifying and re-architecting the critical capabilities and services for the digital and cloud era. However, it is vital to consider this as more than just a technology exercise. A fundamental requirement in re-imagining will be adopting customer centricity as a core pillar and applying leading methods such as design thinking and customer journeys for internal and external customers.

Suggested Approach

This is a daunting proposition. Should the organization launch into such an initiative, it will consume the energies of the IT and business teams, leaving limited budget and bandwidth for other BAU initiatives which are also essential. Above all, there is no guarantee that the organization will successfully accomplish all its objectives even after spending substantial time, effort and money.

A more measured approach is to embark on this journey iteratively, rather than ripping and replacing the entire technology landscape in one go. For example, the acquirer can, after due consideration for the product and service proposition going forward, carefully evaluate and identify certain key capabilities for cloud migration which can be candidates for the initial iterations. In arriving at this decision, the following factors may be considered:

  • Can these candidates independently provide business value and test out the hypothesis for the related part of the product/service proposition?
  • The risk profile of the candidate "capabilities" for the initial iterations: will these expose the acquirer to unnecessary risk at the outset?
  • Technological complexity: nothing is gained by taking on challenges that are either technically trivial or overly complex.
  • Does it provide all-round exposure in terms of technology/tooling/methods for both the business and technology teams that can be built upon in subsequent iterations?

Let’s examine one of the critical capabilities essential to the merchant acquirer: Dispute handling. Dispute management was long relegated to being labeled a back-office support application and not considered a major service differentiator. Recent industry trends have challenged this wisdom. An efficient dispute management capability is now a key service differentiator. Delays in interactions with the merchants around disputed transactions and their resolution more often than not result in financial losses to merchants and, at times, acquirers too, leading to merchant attrition.

Dispute Management

Traditionally, acquirers had either home-grown or packaged solution for managing disputes. These dispute applications were built either on legacy technologies or even relatively modern frameworks. Such implementations typically suffer from some of the following limitations:

  • High TCO – a substantial portion of the cost may go towards the underlying framework IP and yet incur significant customization costs.
  • Lock-in to some of the technology components used in the framework – in most cases, acquirers are paying for technology features which are unlikely to be fully utilized.
  • Lack of agility since most business-related changes will need code modification of some form.

Rather than migrating the current dispute capabilities as-is to the cloud, it would be opportune for traditional acquirers to apply design thinking and customer-centric principles to reimagine their current dispute management capabilities while migrating to the cloud.

In addition to overall lower TCO, potential benefits include:

  • Remove redundant / duplicated activities, help improve automation levels and reduce turnaround times for the acquirer as well as the merchant.
  • Empower the merchants by providing timely updates about disputes and the ability to interact through the channel of their choice (paper, portal, API, etc.) depending on the merchant’s size and capabilities. In fact, multi-channel capabilities and a consistent interface for dispute management can help acquirers swing large global merchants their way and can be the difference in clinching the deal. And if all of these can be achieved in a consistent manner with reusable components, it reduces the overall support costs.
  • Improved management of disputes, applying past learnings to improve utilization of limited operations resources.

Conclusion

There are no easy options, yet, as clichéd as it may sound, doing nothing is not an option. A strategic approach to cloud migration will serve traditional acquirers well and not only keep them in the fight to stay relevant but empower them to forge ahead. Rather than navigate this path by themselves, merchant acquirers will be well advised to partner with an SI or technology services company who can bring the experience of having helped other organizations in various industries traverse the cloud migration journey.

For a more detailed discussion on an approach for and advantages of moving disputes to the cloud, check out our webinar.

5 Business Drivers for migrating your data warehouse to Cloud in 2025

This blog post discusses the top five business factors that make moving your data warehouse to the cloud a wise decision in 2025.

1. Scalability and Flexibility:

Scalability is one of the main advantages of migrating your data warehouse to the cloud. With a cloud data warehouse, your company can easily scale its IT resources up or down as needed. For long-term success, businesses are experimenting with a variety of data modeling techniques, and because there is no one-size-fits-all answer, cloud computing once again proves its mettle by being able to grow on demand and adapt to changing requirements. Data warehouse modernization offers businesses an infrastructure that meets the purpose as and when necessary, without integration or optimization difficulties, thanks to autonomous scaling (or de-scaling) of servers, storage and network bandwidth to manage massive volumes with unprecedented efficiency.

2. Cost-effectiveness:

Cost-effectiveness is one of the most compelling commercial reasons for moving your data warehouse to the cloud. On-site data warehouses demand a hefty initial outlay for technology, software licenses and ongoing maintenance. In contrast, pay-as-you-go cloud-based data warehousing enables organizations to match expenditure with real consumption. Utilizing the cloud minimizes the risk of underutilized resources, lowers maintenance costs, and eliminates the need to purchase hardware. Further cost optimization is possible because of the variety of pricing options provided by cloud providers, including reserved instances and spot instances. By moving to the cloud you can drastically lower the total cost of ownership while gaining access to cutting-edge analytics capabilities.

3. Design for the present and the future needs:

Technology is a great facilitator and accelerator in the pursuit of growth and innovation. This includes staying on top of developments and streamlining all procedures to ensure their dependability. Take into account the benefits of zero-code ETL tools, self-service BI and DW automation platforms, as well as the rate of change in each of these areas. These cutting-edge platforms and solutions let you confidently satisfy new business requirements at speed and scale.

4. AI and Advanced Analytics:

In the era of data-driven decision-making, organizations are increasingly depending on AI and advanced analytics to gather insightful data and spur innovation. Platforms for cloud-based data warehousing offer a solid framework for putting sophisticated analytics solutions in place. You may harness the power of predictive and prescriptive analytics to find hidden trends, spot anomalies, and generate data-driven predictions by integrating seamlessly with other cloud services, such as machine learning and AI platforms. Businesses may experiment with various analytics methods and easily scale their infrastructure to meet the rising needs of AI workloads thanks to the flexibility and scalability of the cloud.

5. Data Security and Compliance:

Businesses have always been very concerned about data security and compliance, especially when dealing with sensitive consumer data and legal requirements. Cloud providers make significant investments in strong security measures and adherence to industry best practices, frequently surpassing the security capabilities of traditional on-premises solutions. By moving your data warehouse to the cloud, you can take advantage of cutting-edge security features like encryption, data masking, identity and access control, and continuous monitoring. To ensure compliance with local and industry rules, cloud providers also undergo frequent audits and maintain certifications. You can improve data security and meet compliance standards more successfully by entrusting your data to a reliable cloud provider.

Conclusion:

In 2025, moving your data warehouse to the cloud offers a variety of business benefits that can transform your company's data capabilities. The cloud offers a complete solution to unlock the full potential of your data assets, from scalability and cost-effectiveness to improved performance, advanced analytics and strong security. By using the cloud, businesses can maintain their agility, make quicker, data-driven decisions, and gain new insights for innovation and expansion. To maximize the benefits and overcome any potential obstacles, make sure the migration is well planned and executed with a smooth transition process.

Getting Started

While the benefits are numerous, and the technology matures, there can be many pitfalls on the path to migrating your data warehouse to a cloud environment. Understanding which platform and strategy can best help you achieve your business goals is a crucial first step. An experienced solutions provider should be able to help you conduct your cloud strategy and assessment to develop an implementation roadmap.
