Microsoft PowerApps – Build your Business Apps Faster & Smarter

One Platform, Unlimited Benefits

Traditional approaches to business seem to be collapsing, and companies are trying to develop innovative solutions. In today’s fast-paced environment, you need tools that work faster, perform better, and can scale with your business.

Whether we are headed to a meeting or even on an airplane, work happens on our tablets, laptops and phones. Mobile technology, the cloud, skilled expertise and virtually limitless computing have transformed the way we do business. Yet the apps we use to do business are slow to keep pace with business demand.

While organisations are turning more and more towards SaaS solutions for specific scenarios like HR, hospitality and travel, utilizing services like Microsoft Dynamics, Concur or Workday, most business app scenarios still remain bolted to on-premises systems, dependent on corporate-connected PCs.

Too often, these apps are not easily integrated with other services like virtual meeting tools and HR applications, and they are not accessible when and where people need them most – on the device they want to use in that moment. Business applications as a category are always a step behind consumer applications, the primary reason being the richness and ubiquity that the latter provide.

Microsoft PowerApps offers an answer to these issues. PowerApps is an enterprise service for technology frontrunners, enabling them to connect everywhere and to create and share business apps with their team on any device in minutes. Additionally, PowerApps enables anyone in the enterprise to unlock new business agility.

So what exactly is PowerApps?

Fundamentally speaking, Microsoft PowerApps is a Platform as a Service (PaaS). It enables you to create mobile apps that run on Windows, iOS, Android and more – with almost any Internet browser. PowerApps is a platform for developing and using custom business apps that connect to your data and work across mobile and the web, without the time and expense of custom software development.
Not just a platform, PowerApps is also a standalone mobile app! Traditionally, mobile app development meant creating a separate app for each operating system. This was a headache, as it could triple an organization’s development time and, eventually, its cost. Furthermore, organizations would require more resources to create business apps.
Everything created in PowerApps functions through and within the PowerApps mobile app. This closes the gap between operating systems and allows you to run your apps anywhere. In simple terms, it is a bridge that gives mobile apps an easier pathway to function across mobile platforms.
PowerApps also has a web version – the same concept, but running in any modern web browser instead of a mobile app.
This highly productive platform has made its mark in the market, helping organizations do business smarter and faster. Let us look at a few benefits that make for a great user experience and benefit businesses.

One Platform, Unlimited Benefits:

Mobile-First

PowerApps marks a shift from Microsoft’s earlier desktop-first strategy: apps built with PowerApps are designed to be used on mobile devices first. It is irrelevant whether you use an Apple device, a Windows phone, an Android device, or a tablet – you can still utilize an app designed with PowerApps.

Cost effective

For organizations that outsource their app development, this is extremely important. PowerApps enables you to build in-house – a move that can save your organization from taking a financial beating. Additionally, this allows your present employees to focus on ensuring that line-of-business users have a unified app experience.

Makes Data easy to Manage

Many organizations have various solutions supporting their business, with data stored in different locations. This comes with its own management risks, and getting all that data working in agreement all the time can prove tough.
With PowerApps, you have the magic of its connectors. There are over 230 of them, and the list is growing every day. Salesforce, Dropbox, Smartsheet and Oracle are just a few, and you can use all of these seamlessly without having to write any code.

Integrating Multiple Platforms

Integrating different platforms and applications has always been a challenging task. Some projects have stalled due to the difficulty or high expense of building interfaces between platforms. With PowerApps and its connectors, organizations can integrate with multiple platforms. Office 365, Salesforce, Mailchimp and many more can be used effectively and integrated with ease.

Having read all the pros of Microsoft PowerApps, it may seem infallible. However, it also has a few cons.

The ‘NO’s’ of PowerApps

PowerApps apps are essentially business mobile apps – which means internal use. You are not going to build a PowerApps app that you can share with everyone. These apps are not intended for consumer consumption, mostly due to the technical limitations around sharing with external users and the licensing model.
Additionally, the majority of the functionality in PowerApps is “no-code.” So your in-house developers are restricted: they cannot include custom HTML or JavaScript, or add any hackable element to it.

Conclusion

It is crystal clear that PowerApps helps us create apps with ease, which means less development time and effort, helping organizations automate their processes. Organizations can connect it to different cloud services like Office 365, Dynamics CRM, Salesforce and Dropbox. PowerApps accelerates how business apps are built, which results in time efficiency.
Nitor is an early adopter of Microsoft PowerApps. Our development teams are using PowerApps to develop a range of solutions for businesses. We at Nitor can help your business hop onto the new platform quickly. Our experts can assess and identify the need gaps and recommend the best pathway.

For more information, please contact marketing@nitorinfotech.com

Progress Kinvey – Build Better & Faster Applications for Tomorrow

The current scenario

A mobile presence is indispensable to remain in any game in the long run – a fact organizations have now learnt, and most have built a mobile presence in some form. Whether that presence is a mobile-enabled website or a mobile application appears to rely upon differing variables, like spend strategy, range of abilities, prioritization, and understanding of client needs.

Some of the key activities driving the mobile economy have been extending or replacing customer service by means of self-service, increasing field worker efficiency, going paperless, faster issue resolution at a lower cost, and better client engagement and trust. Many organizations’ first attempts at a mobile presence have missed the mark regarding both business and client expectations, and have been unable to provide strategic business value or help attain digital business goals.

A number of organizations lack developer bandwidth, as developers are clearly required for fixes, enhancements and keeping up with the latest upgrades. Additionally, organizations find it difficult to build a feature-rich app experience with the tools, teams and infrastructure on hand.

Each of these organizations actively sought a better way to achieve their digital business strategy via their mobile apps. They evaluated several approaches and chose Kinvey’s Backend as a Service.

Kinvey – The Future is Bright

Kinvey is a pioneer in mobile Backend as a Service (mBaaS), inventing the category more than six years ago. It uses unified Application Programming Interfaces (APIs) and Software Development Kits (SDKs) to connect mobile, web, and IoT apps to backend resources in the Cloud. Kinvey mBaaS can also be used to federate enterprise and Cloud services and provides common app features such as data mashups, push notifications, business logic, identity management, social networking, integration and location services.
Its aim is to reduce the time to market of new mobile application development by around 50%. Kinvey enables developers by completely decoupling and abstracting the server-side infrastructure. Frontend developers get a single protocol, data format, and query language to access any enterprise or cloud system.
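
To make the unified-API idea concrete, here is a minimal Python sketch that reads records from a Kinvey collection over its REST interface. The app key, app secret and the “books” collection are hypothetical placeholders; the endpoint shape follows Kinvey’s documented REST conventions, so verify it against the current docs before relying on it.

    import requests

    # Hypothetical credentials from the Kinvey console (placeholders only).
    APP_KEY = "kid_example123"
    APP_SECRET = "appSecretExample"

    # Kinvey exposes app data under /appdata/<appKey>/<collection>;
    # here we query a hypothetical "books" collection.
    url = f"https://baas.kinvey.com/appdata/{APP_KEY}/books"

    # Basic auth with app credentials keeps the sketch short; a production
    # app would normally authenticate an end user and send a session token.
    resp = requests.get(url, auth=(APP_KEY, APP_SECRET), timeout=10)
    resp.raise_for_status()

    for book in resp.json():
        print(book.get("title"))

The same few lines work regardless of which enterprise or cloud system actually backs the collection – that is the abstraction at work.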

Benefits for you

Following are some of the benefits Kinvey can offer:

1. Serverless Architecture
Enables deployment on a serverless platform – a developer favourite. It also offers cloud portability – an architect’s first choice.

2. Run in the Cloud
Allows you to build and run applications without having to manage the underlying cloud infrastructure

3. Secure, Data-Rich Apps
Enables secure, data-rich apps through no-code and low-code enterprise system integration

4. NoSQL Storage
Kinvey uses NoSQL (MongoDB) and allows users to store all types of data, such as collections (tables) or blobs (files)

Benefits for your developers

• Deliver features and capabilities needed to achieve your business goals faster
• Provide whatever you can imagine without technology or resource constraints
• Ensure that you meet your time to market goals
• Reduce time from ideation to delivery and more enhancements per release
• Create flexibility by allowing the use of any development resource
• Guarantee zero delay in getting your project started and access data from any application or data source from within mobile apps

When software development teams leverage the abilities of the Kinvey platform, the fundamental roadblocks to development agility are cleared, and you gain the benefits of agile development processes, including the ability to respond to user feedback rapidly and efficiently. With Kinvey, organizations can significantly cut their development release cycles.

Business Value

The business value of Kinvey can be distilled down to some of these factors:

Kinvey provides a fully managed service with pre-built frontend and backend mobile application development accelerators and built-in operational intelligence for rapid troubleshooting of user issues. There is no need for customers to develop their own mobile app delivery foundation, since Kinvey provides all of these services, enabling customers to focus on what is important: value-added features and rapid response to user issues.

By abstracting future backend system changes through the Kinvey platform, development teams will no longer need to know the nuances of enterprise systems data access paradigms, allowing them to focus 100% on frontend work. Backend engineers will provide controlled access to enterprise systems via a reusable service catalog that developers need to set up just once.

And finally, how is Nitor leveraging the Kinvey platform?

With over 30,000 applications and 85,000 developers in their community, Kinvey is the leading mobile application Backend as a Service (mBaaS) for the digital enterprise.

We at Nitor started with Kinvey by primarily migrating the backend for some of our mobile applications. We were amazed at the ease with which we were able to implement it. By leveraging the Kinvey platform, Nitor’s experienced team helps enterprises create feature-rich applications with almost 40% to 50% less time to market.

Performance Engineering – Ensure Reliable, Scalable and Seamless Application performance

Being a developer involves a lot more than just coding. As highly distributed applications become more complex, developers need to guarantee that the end product is easy to use, secure, and as scalable as possible. With the right tooling, software teams can identify potential performance issues in their applications earlier in the development cycle and make steady, high-quality fixes.
Everything from networking and frameworks to running cloud infrastructure and gathering and analysing UX data requires your software teams to build solid testing methods into every stage of your application’s development.
Effective performance engineering is the way forward. Performance engineering does not refer just to a particular role. For the most part, it refers to the set of skills and practices that are systematically understood and adopted across organizations, focused on achieving a higher level of performance in technology, in the business, and for end users.

Why is Performance Engineering important?

Performance engineering entails the practices and skills needed to build quality and high performance throughout an organization, including functional requirements, security, usability, technology platforms, devices, third-party services, the cloud, and more. The goal is to deliver better business value for the organization by discovering potential issues early in the development cycle.
Performance engineering is a vital part of software delivery, yet many IT organisations find it expensive and challenging. Despite big performance failures continually making headlines, performance engineering has been unsuccessful in getting the attention and budget it deserves in many companies.

How to make the most of performance engineering?

Here are things to keep in mind when incorporating the performance engineering process into your model.

1. Build a Strategy

Building a performance engineering approach is a vital part of the process, and you need to be sure about how to align it with your organisation and delivery model.
– Identify the SMEs and the touchpoints that you will require in your development lifecycle.
– Understand what the quality gates are and how they will be governed.
Always remember that it all starts with the requirements. If your product owner knows what level of performance they want from the system, it becomes easier for engineers to meet those requirements.

2. Plan the Costing

One thing is for sure: it takes a good sum of money to build a high-end performance engineering practice. As you build your execution roadmap, you may need to go through various budget cycles in order to get all the infrastructure and tools ready.
– Stay firm and positive
– Use the failures organizations have faced in the past to persuade stakeholders of the significance of performance engineering

3. Classify Crucial Business Workflows

If you do not have information about the right tools, get in touch with the vendor early, as a wrong choice can turn out to be costly and time-consuming.

Always remember: it is better to spend time on creating workflows that are critical to the business and that have the maximum throughput.

4. Find the Baseline and Test Regularly

The next stage is to benchmark the performance pattern with a set of performance tests. These tests can be reused on numerous occasions.

– Keep a history of your production runs, marked by trends, to check for patterns in system performance. In an ideal scenario, this should be done for every release and every integration. If the trend analysis can be automated as part of a CI/CD process, all the better.
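
As a minimal sketch of what that automation can look like, the Python script below compares a hypothetical current run’s p95 latency against a stored baseline and fails the build on a significant regression. The file names, the metric and the 10% threshold are all assumptions to adapt to your own tooling.

    import json
    import sys

    # Hypothetical files: baseline.json holds the last accepted run's metrics,
    # current.json holds this build's load test results, e.g. {"p95_ms": 412.0}.
    REGRESSION_THRESHOLD = 1.10  # fail if p95 latency grows by more than 10%

    def load_p95(path):
        with open(path) as f:
            return json.load(f)["p95_ms"]

    baseline = load_p95("baseline.json")
    current = load_p95("current.json")
    print(f"baseline p95: {baseline:.1f} ms, current p95: {current:.1f} ms")

    # Exit non-zero so the CI pipeline marks the build as failed on regression.
    if current > baseline * REGRESSION_THRESHOLD:
        sys.exit("performance regression: p95 latency exceeded threshold")

Run as a build step after the load test, a script like this turns the trend check into a quality gate rather than a manual review.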

5. Use the best Tools and Hardware

You will require the proper APM, diagnostic and testing tools for performance engineering. It is imperative that you distinguish the things you will require from those you won’t, to properly run tests and analyse bottlenecks.

Production-like environments are usually costly, but preferably you will have one for your performance testing in any case. If you are testing frequently with each deployment, the patterns will point to bottlenecks that the engineers need to be vigilant about.

6. Have a Data Strategy in place

As you will test frequently, you should have the capacity to create test data rapidly and effectively. It is imperative that the data you have resembles the production environment. Remember, if you are not using a representative data set, the query plans will be different.
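
As a small sketch of one way to do this, the Python snippet below generates a production-like volume of synthetic order rows with skewed value distributions, so that indexes and query plans behave more like they would in production. The schema, row count and distributions are all hypothetical.

    import csv
    import random

    # Hypothetical schema: orders(id, customer_id, amount, status).
    # Skew matters: most real orders are "completed", few are "refunded",
    # and a representative skew keeps query plans realistic.
    STATUSES = ["completed"] * 90 + ["pending"] * 8 + ["refunded"] * 2

    with open("orders_testdata.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "customer_id", "amount", "status"])
        for order_id in range(1, 1_000_001):  # production-like row count
            writer.writerow([
                order_id,
                random.randint(1, 50_000),             # many orders per customer
                round(random.expovariate(1 / 80), 2),  # long-tailed amounts
                random.choice(STATUSES),
            ])

Bulk-loading a file like this into the test database keeps data creation rapid and repeatable across test runs.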


What are the Business Benefits?

As you can clearly see, the above steps are vital when it comes to incorporating a performance engineering process into your business model. These steps ensure that your organization benefits from it.

Listed below are some of the benefits of performance engineering from an organization’s perspective:
1. Decreased burden: Reduced vulnerability of applications when the anticipated load is high

2. Optimal utilisation of resources: The infrastructure may be over- or under-provisioned; performance engineering reveals the utilisation patterns and helps in making strategic decisions.

3. Guaranteed support: An assured level of commitment that an application will perform within the given supported criteria

4. Future ready: Helps in taking future decisions for scaling the applications

5. Increased adaptability: Helps in validating the application design and in making incremental changes to the applications

What can we conclude?

It is quite clear that performance engineering helps in benchmarking application performance and allows organizations to identify all business-critical scenarios for performance testing. Additionally, it helps to determine the extent of availability and reliability of the application, while instilling mechanisms to constantly advance application performance.
In short, performance engineering should be a priority before releasing any software or application. It should be executed early in the development phase to catch more bugs in advance and increase user satisfaction, while saving you time and money down the line.
Nitor is proficient at providing an excellent user experience through reliable application performance, using various frameworks and tools to test, monitor and streamline performance and optimise infrastructure cost.

To know more please drop us email at marketing@nitorinfotech.com

BDD – Be Agile, Create Value & Build Highly Visible Test Automation

Everybody likes to complete things in their own specific manner. However, when it comes to software programming, it is always beneficial to have a set of principles for each phase of software development.

Opening up the discussion and keeping the various technical teams on the same page can allow software to come together seamlessly. As organizations move towards the coding phase, they need to adjust their procedures to fit their present work processes. So what is it that can define user behaviour prior to writing test automation scripts?

That is called BDD (Behaviour Driven Development).

What is BDD?

BDD is a development process, which describes the behaviour of an application for the end user. It is an extension of TDD (Test Driven Development). In BDD, the behaviour of the user is defined and converted to automated scripts that run against functional code. These test scripts are written in a business-readable, domain-specific language known as Gherkin, which ultimately reduces the risk of developing the wrong code. The following points clearly outline the value of BDD.

1. BDD is not testing; it is a process of developing software. It considers questions like: where to start in the testing process; what to test and what not to; how much to test in one instance; what to name the tests; and how to understand when and why a test fails. It is what can be called a rethinking of unit testing and acceptance testing.

2. In TDD, tests were developed first and failed until functional code was arrived at – the point at which a test was considered to have passed. BDD enhanced this by having the tests written in a specific, business-readable format.

3. Since the language used in BDD is domain-specific, requirements are now more meaningful and all stakeholders are on the same page, as opposed to the earlier ‘only developer- and tester-friendly’ ones.

4. BDD does not change or replace traditional UI automation tools like Selenium or Appium.

5. In terms of test automation, BDD represents a presentation layer; in other words, it can present test intent in a clear-cut and standardized format.

As you can clearly see, BDD is less about the technical side of testing than about process. Let us try to understand why and how BDD is important.

BDD helps bridge the communication gap between clients, developers and other stakeholders.

Collaboration – In traditional testing, nobody outside the test team would recognize which part of a test or scenario was failing. With the BDD approach, everyone – stakeholders, the product team and developers – understands the tests, making it a win-win situation for organizations.

Requirement Change Management – Traditionally, requirement clarifications were logged in collaboration tools like Jira or other project management tools. With BDD, any changes in requirements are automatically documented as tests.

Test Management Tools – In the traditional method, test management was separate: tests were automated and manually tracked within the test repository. With the advent of BDD tools, static metrics such as the number of specs and the number of scenarios are collected automatically. Furthermore, other test metrics can easily be added.

Single Source of Truth – Traditionally, requirements would be transferred from project management to test management, and finally to automation. With BDD, in a mature agile process, specs are written directly in Jira and can serve as the single source of truth, in contrast to testers separately interpreting requirements.

Phases of BDD

The overall BDD process involves two important pillars – process insights and tools/technologies. Let us look in detail at how vital each of them is to the BDD process.

a.  Process Insights

To benefit from BDD-based test automation, it is imperative to have a process spanning planning, BDD design and the test automation framework.

Planning – Stories/features should be picked up for automation based on priority. An iterative discussion helps to identify which activities would benefit ongoing automation efforts. For best results, if the product is still evolving, effort estimation could be followed by a stabilization phase for the test automation activity, instead of the usual factory approach.

BDD Design – It is recommended that scenarios be designed by QA/BAs rather than quality engineers, since they are the owners of product quality. In addition, the principle of collaboration mandates that they own this part of the automation effort.

The scenarios should also be reviewed for functional flow, behaviour semantics and step reusability by all concerned stakeholders – QA, BA and engineers. Review should be a de facto part of the design process.

Test Automation Framework – The framework should ensure that the reusability promised by BDD design is complemented by the implementation. Standard automation and development practices must be followed to ensure efficient output.

b. Technologies/ Tools

Some automation tools that support BDD are listed below:

Platform – BDD Tools
Java – JBehave, Cucumber, Gauge
C# – SpecFlow
Python – Behave
Ruby – Cucumber
JavaScript – GaugeJS
PHP – Behat
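
As a minimal illustration of the Gherkin-plus-step-definition split these tools share, here is a sketch using Python’s behave; the feature text, step names and login logic are hypothetical.

    # features/login.feature (Gherkin, shown here as a comment):
    #   Feature: Login
    #     Scenario: Valid user signs in
    #       Given a registered user "alice" with password "s3cret"
    #       When she signs in with password "s3cret"
    #       Then she should see her dashboard

    # features/steps/login_steps.py
    from behave import given, when, then

    @given('a registered user "{name}" with password "{password}"')
    def step_register(context, name, password):
        context.users = {name: password}
        context.name = name

    @when('she signs in with password "{password}"')
    def step_sign_in(context, password):
        context.signed_in = context.users.get(context.name) == password

    @then('she should see her dashboard')
    def step_dashboard(context):
        assert context.signed_in, "login failed"

The Gherkin scenario stays readable for business stakeholders, while the step definitions carry the automation – which is exactly the collaboration benefit described above.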

Apart from automation tools, test management based on BDD test designs plays an important role. Tools like TestRail and HipTest now support BDD-based test editor functionality and ensure better integration of process and implementation.

Business Benefits

Once process insights and tools/technologies are in sync, BDD automatically offers benefits:

  • Know What You Test (KWYT) – Since testing is not performed in isolation, continuous tracking and reading of what is being tested becomes possible. Coverage cannot be missed, and product owners can now chip in proactively if something is being overlooked.
  • High Visibility – Due to collaboration, the tests, their quality and their results are visible to all management stakeholders which gives confidence in taking decisions for product releases.

Conclusion

Behaviour Driven Development helps in building quality and creating value. Instead of having tests that are only useful for engineers, BDD aims at tests useful for all. Additionally, it improves the partnership between the parties: developers get a clearer scope of essential features, and the customer gets better knowledge of what will be delivered, with more accurate estimates.

Nitor excels at streamlining and operationalizing BDD Based Test Automation through its ready-to-use frameworks, successfully employed strategies and efficient use of tools/technologies.

If you are interested in finding out more about BDD, write to us at marketing@nitorinfotech.com

Boost your business foundation with Microsoft Dynamics xRM

Regardless of what industry your company works in, clients are your most vital resource, and handling those client relationships is the foundation for developing your business. Plenty of organizations look to CRM to manage sales, customer service and marketing. CRM (Client Relationship Management) software can help gather, sort and deal with the majority of your client information, and can be integrated with everything from finance to operations.

One such CRM, Microsoft Dynamics, is one of the most popular tools on the market. Not only does it meet the needs and budgets of small, mid-sized and large organizations, but it also makes marketing more effective and assists you in getting more out of your customer relationships. Furthermore, Microsoft Dynamics CRM offers the flexibility of both on-demand and on-premise deployments. Additionally, this powerful CRM program offers unparalleled integration with the Microsoft Office suite, Microsoft SQL Server, Microsoft Exchange Server and Microsoft SharePoint – some of the most widespread applications in the business world.

Do you need software that is a step ahead?

A term often associated with CRM – with a twist – is ‘xRM’, or ‘eXtended Relationship Management’. xRM is an extension of CRM for organizations that deal with policies, property taxes, building assets, and the list goes on: with xRM you can manage the relationship of anything within your company. xRM represents the extension of CRM platforms, allowing organizations to thrive by helping them manage employees, processes, suppliers, assets and much more.

xRM has several key components, which together provide a strategic approach to building a unified system that connects all aspects of a business. The xRM components are as follows:

1. Entities & Records

2. Fields

3. Forms

4. Web Resources

5. Workflow Processes

6. Plugins

7. Web Services

As you can clearly see, the above components are essential to xRM. However, the question remains: is it useful to deploy a solution like xRM? Will organizations reap any benefit out of it? Or is it just a fad? To answer that honestly, xRM is a natural next step if you already have CRM within your organization. It has several crucial advantages, which can be vital for developers as well as for organizations.

What is in it for Developers/Organizations?

These days there is little time to write a lot of custom code to deliver solutions. With xRM, developers can build applications rapidly. To meet the requirements of business applications, xRM provides a framework with the agility and flexibility to adapt to changes and to drive user acceptance and adoption.
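
As a small, hedged illustration of the kind of development this enables, the Python sketch below creates a record for a hypothetical custom entity through the Dynamics 365 Web API. The org URL, the entity set name “new_assets”, its fields and the bearer token are placeholders; real code would obtain the token via Azure AD (for example, an OAuth client-credentials flow).

    import requests

    # Hypothetical Dynamics 365 org URL and OAuth bearer token
    # (obtained separately via Azure AD; omitted here for brevity).
    ORG_URL = "https://yourorg.crm.dynamics.com"
    TOKEN = "<access-token>"

    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
        "Content-Type": "application/json",
    }

    # Create a record in a hypothetical custom entity "new_assets" –
    # the kind of xRM entity you would model instead of building an
    # application from scratch.
    record = {"new_name": "Boiler #12", "new_assettype": "HVAC"}
    resp = requests.post(
        f"{ORG_URL}/api/data/v9.2/new_assets", json=record, headers=headers
    )
    resp.raise_for_status()
    print("Created:", resp.headers.get("OData-EntityId"))

Because the entity, its forms and its workflows are all declared on the platform, the custom code that remains is thin integration glue like this.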

From an organization’s point of view, when you take Dynamics 365 and utilize it as a platform for building an xRM system, you get a rock-solid foundation on which to build ‘line-of-business’ (LOB) solutions. Everything can be tailored according to your company’s needs and incorporated smoothly with other critical systems.

xRM solutions offer flexibility and customization to meet almost any business or organizational need. Integrating an xRM solution with the Microsoft Dynamics CRM will provide you with several important advantages.

Automation at its best – Microsoft Dynamics integration with xRM automates important tasks that employees would otherwise have to complete manually.

Rapid deployment – Developers do not have to worry about building an LOB software from scratch, as software plugins extend the functionality of the core Microsoft Dynamics CRM system.

Robust Security – Another key advantage is that xRM provides robust security features. It has security roles for users and objects that restrict access to sensitive data, SSL connections for data transfer, and more.

Native Integration – xRM solutions can connect existing systems to CRM, freeing data trapped in outdated systems. Microsoft Dynamics CRM also provides native integration with Microsoft SharePoint® and Microsoft Office® applications including Outlook®, Excel®, and Word.

We at Nitor take pride in our xRM solution capabilities. We specialize in xRM plug-in development, OOTB customizations and creating custom workflows to benefit your organizational requirements.

To find out how xRM can eliminate silos and build a unified marketing & sales funnel, write to us at marketing@nitorinfotech.com.

Dynamic Data Masking: It’s time to secure and transform your data

What is Dynamic Data Masking?

According to Microsoft, dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal, with minimal impact on the application layer. DDM can be configured on the database to hide sensitive data in the result sets of queries over designated database fields, while the data in the database itself is not changed. It does not encrypt the data, and a knowledgeable SQL user can defeat it.

Nevertheless, it provides a simple method to administer, from the database, what information the different users of a database application can and cannot see, making it a valuable tool for the developer. That said, dynamic data masking needs a proper implementation. Let us look at how exactly dynamic data masking is implemented:

  • To implement DDM, you define masking rules on the columns that contain the data you want to protect.
  • For each column, you add the MASKED WITH clause to the column definition, using the following syntax:

    MASKED WITH (FUNCTION = '<function>(<arguments>)')

  • Dynamic data masking (DDM) limits sensitive data exposure by masking it to non-privileged users, which can greatly simplify the design and coding of security in your application.
  • Masking is applied by the database engine itself, as close to the server as possible, so sensitive data is hidden in the result sets of queries over designated fields while the data in the database is not changed.
  • DDM is easy to use with existing applications, since masking rules are applied in the query results and no application changes are required.
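
As a hedged, end-to-end sketch of those steps, the Python script below uses pyodbc to create a table with masked columns and grant a non-privileged user access, applying the built-in default(), email() and partial() masking functions. The table, user name and connection string are hypothetical; the T-SQL follows SQL Server’s documented DDM syntax.

    import pyodbc

    # Hypothetical connection string to a SQL Server / Azure SQL database.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "DATABASE=mydb;UID=admin;PWD=<password>"
    )
    cur = conn.cursor()

    # Define masking rules inline with the column definitions.
    cur.execute("""
        CREATE TABLE Customers (
            CustomerId INT IDENTITY PRIMARY KEY,
            FullName   NVARCHAR(100) MASKED WITH (FUNCTION = 'default()'),
            Email      NVARCHAR(100) MASKED WITH (FUNCTION = 'email()'),
            Phone      VARCHAR(20)   MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)')
        )
    """)

    # A user without the UNMASK permission sees masked values in query
    # results; the stored data itself is unchanged.
    cur.execute("CREATE USER AppReader WITHOUT LOGIN")
    cur.execute("GRANT SELECT ON Customers TO AppReader")
    conn.commit()

Querying the table as AppReader would return masked values such as “xxxx” for the name, while an administrator still sees the real data.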

To summarize, a central data masking policy acts directly on sensitive fields in the database, and designates the privileged users or roles that do get access to the unmasked data. DDM features full and partial masking functions, as well as a random mask for numeric data.

What makes Dynamic Data Masking Special?

As you can clearly see, the data masking practice is vital and can help organizations guard against data breaches. Here are some additional dynamic data masking benefits that organizations should consider:

  • Regulatory Compliance – Helps applications meet the privacy standards recommended by regulatory authorities.
  • Sensitive Data Protection – Protects against unauthorized access to sensitive data in the application, and against exposure to developers or DBAs who need access to the production database.
  • Agility and Transparency – Data is masked on the fly, with underlying data in the database remaining intact. Transparent to the application and applied according to user privilege.

As you can see above, dynamic data masking has a number of benefits for organizations. Similarly, DDM can be an asset when it comes to developers. Let us have a look at how developers actually benefit from DDM:

  • In DDM, simple and understandable rules are defined to operate on the data. The collection of these rules performs a series of known, tested and repeatable actions at the push of a button.
  • The masking engine handles even the most intricate data structures. It can preserve data relationships between rows in different tables, between rows in the same table, or even internally between columns in the same row
  • Data synchronization issues of this type can be automatically handled by the addition of simple, easily configured masking rules.
  • DDM works easily with tables containing hundreds of millions of rows.

 Conclusion:

Information security is a never-ending issue; it will always be something we have to stay on top of. Dynamic data masking at least gives us a comfort zone where we avoid giving the data away outright. Additionally, it minimizes the risk of accidental data leakage through dynamic obfuscation of sensitive data in database responses.

Nitor’s dynamic data masking services enable customers to focus on the sensitive data elements in the desired databases. Our key objective is to provide customers with a working data masking solution while helping them build knowledge and confidence. Additionally, we believe that dynamic data masking is complementary to other security features in SQL Database (e.g., auditing, encryption, RLS) and should be used as part of a comprehensive access control and data protection strategy.

To learn which implementation option best meets your organization’s data masking needs, please contact marketing@nitorinfotech.com

GitHub Acquisition: Reconciling GitHub with Microsoft

Microsoft’s most recent business move has shrouded the developer community in a state of wariness. GitHub, a popular open source code collaboration platform for developers and scientists (basically anyone working with data), was acquired by the tech behemoth for $7.5 billion. This figure represents an amount thirty times (!) GitHub’s annual recurring revenue.

Before the acquisition, GitHub suffered from multiple issues. These included serious monetary and leadership problems. GitHub narrowed solutions down to two options: the first was to hire a new CEO to streamline the company’s business direction and thus gain invaluable funding opportunities. The second option was to be acquired. GitHub chose this easier, faster path.

The optimal candidates to be acquired by were companies with access to large enterprise customers/subscriptions. This insight is derived from GitHub’s revenue model; GitHub is free for individuals but requires enterprise users to pay. Google, Amazon and Microsoft were among the companies enticing GitHub with offers of acquisition. In the end, GitHub decided to go with Microsoft because of the tech titan’s more generous value offering. GitHub was also fully aware of Microsoft’s increased appreciation of open source (especially with Satya Nadella as CEO) and of its desire to show this to the world.

In this situation, Microsoft had an excellent opportunity to advance their own interests. First was the opportunity to show the world their transition from a proprietary/monopoly-based business to an open source model. Next was the all-important target of grabbing networking opportunities. Microsoft acquired LinkedIn in 2016, which enabled access to a network of professionals. With the acquisition of GitHub, Microsoft now has access to a network of developers.

With this access to the largest pool of developer mindshare, they can compete with the likes of Facebook. With GitHub being one of the largest code repositories, Microsoft can easily monitor new projects, interests, technologies, and market trends to stay ahead of the competition. Microsoft can also capitalize on an opportunity to woo developers more effectively by creating more offers and generating value, for example by creating attractive Microsoft-based tool chains in open source to gain traction towards Microsoft technology.

Lastly, the acquisition may have been an exercise in building strategic value. The strategic value – which pertains to how a certain company’s offerings help a different company (usually larger) to be successful – of GitHub is essentially in the 85 million repositories and 28 million developers it hosts worldwide. It is not difficult to imagine the value of access to these developers, who regularly use GitHub’s code repository products, especially when they can be welcomed into Microsoft’s immensely profitable developer environment.

Microsoft’s long history of generally running counter to open source software, however, has led to lukewarm reactions from the developer community. Many developers feel that despite Microsoft’s attempts to foster acceptance of an open source culture, Microsoft is not good for GitHub. This stems from GitHub’s initial premise: hosting distributed version control for remote coding, with a flexible coding experience that could boost a developer’s community presence. Behind the developers’ tepid reactions are fears that Microsoft might leverage or co-opt their code for future products, or that the developers will be muscled into using only Microsoft products. Additionally, there are some direct conflicts between Microsoft and GitHub. For example, there are certain GitHub projects that are Xbox simulators; it is quite likely that Microsoft will kill these projects. There are even rumors that Microsoft may add tracking or advertisements to GitHub’s sites. There has thus been an upsurge in developers shifting their code to GitLab, one of GitHub’s prime competitors. In all fairness, however, this might be a reaction to a temporary fear.

So what is the future for GitHub? Looks like only time will tell.

How to Skyrocket Your Venture’s Funding with ICOs

ICOs (Initial Coin Offerings) have gained tremendous traction in today’s world of digital currency. Built upon the security, trust and transparency of the Blockchain paradigm, ICOs have helped companies raise 7 billion USD as of May 1, 2018 – a rise from 5 billion USD in 2017. These facts, coupled with the recent favorable economic climate, indicate that this is the optimal time to capitalize upon the rising tide of cryptocurrency.

Read on to discover how and why you should raise maximum funds with this innovative business model.

Why ICO?

ICOs merge the power of crowdfunding with the allure of cryptocurrency.

In an ICO, internet users view your value proposition and invest in your vision by buying tokens. Note that this happens before the actual token-based marketplace is released to the world. The next step is a full exchange in which the issued tokens can be traded for other currencies. This structure motivates the public to participate in the ICO and own as many tokens as possible, to gain from the tokens’ future enrollment into cryptocurrency exchanges. Because Blockchain technology underlies ICOs, users can be assured of security, transparency, and trust.

Building an ICO Platform

Nitor’s ICO platform follows certain best practices to ensure that your ICO is a success. First, all necessary ICO information is presented on an intuitive website. This includes token information, ICO duration, the beneficiary wallet address, and interfaces to popular cryptocurrency wallets. After this critical step, it is advisable to ask white-listed users to register and share their information so you are sure that every payment is legitimate.

During the ICO, it is useful to display the token status. This is usually shown as Total Tokens Sold vs. Total Tokens Allocated (known as a hard cap). It is also a good idea to have a token calculator, which shows the relationship between one token and a cryptocurrency such as Ether, Bitcoin etc. You will also need to display a transaction history. This is a list or record of transactions showing wallet addresses, amount invested, transaction costs and transaction signatures. This also helps in maintaining accurate records.
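
As a trivial sketch of the token calculator idea, assuming a hypothetical fixed rate of tokens per Ether and a hard cap, the Python snippet below converts a contribution into a token allocation and checks it against the remaining supply.

    # Hypothetical ICO parameters (assumptions for illustration only).
    TOKENS_PER_ETH = 500          # fixed rate shown by the token calculator
    HARD_CAP_TOKENS = 10_000_000  # total tokens allocated for the sale

    tokens_sold = 7_250_000       # running total from the transaction history

    def quote_tokens(eth_amount: float) -> int:
        """Return the number of tokens a contribution would buy."""
        return int(eth_amount * TOKENS_PER_ETH)

    def can_fill(order_tokens: int) -> bool:
        """Check an order against the remaining supply under the hard cap."""
        return tokens_sold + order_tokens <= HARD_CAP_TOKENS

    order = quote_tokens(120.5)   # a prospective 120.5 ETH contribution
    print(f"{order} tokens:", "available" if can_fill(order) else "exceeds hard cap")

In a live sale, the rate and running totals would come from the Smart Contract itself rather than constants like these.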

If issues arise, architectural modularity helps quickly identify and fix problems so that token sale can progress. An AML (Anti-Money Laundering) feature ensures that if, based on data analysis, you see an issue with a particular transaction post-payment, you can reclaim the issued token. Finally, remember that it is important to market your ICO. A strong integrated email notification engine automatically feeds ICO highlights to subscribed users. This has the potential to be used as a powerful marketing tool.

Ethereum-based Engineering Guidelines

Some helpful guidelines for Ethereum-based engineering include:

  1. Choose Modular HTML5 frameworks for front-end development as you may be looking to integrate an existing website instead of developing from scratch.
  2. Leverage the Truffle framework. This is useful for the creation of Smart Contracts.
  3. Follow the best practices in writing solidity files.
  4. Use a sandbox/test network such as Ropsten for integrated tests (see the sketch after this list).
  5. Ensure 100% code coverage for all development.
  6. Modularize Smart Contracts for maintainability.
  7. Ensure that a third-party auditor, instead of the developers involved in writing code, conducts the security audit of Smart Contracts.
  8. Deploy your Smart Contracts to the Ethereum public network.
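
As a minimal sketch of connecting to such a test network with the web3.py library, assuming a hypothetical Infura project ID and wallet address (and web3.py v6-style method names), one might write:

    from web3 import Web3

    # Hypothetical Ropsten endpoint; the Infura project ID is a placeholder.
    w3 = Web3(Web3.HTTPProvider("https://ropsten.infura.io/v3/<project-id>"))
    assert w3.is_connected(), "could not reach the test network"

    # Check the beneficiary wallet's test-ether balance before running
    # integrated token sale tests (the address is hypothetical).
    wallet = "0x0000000000000000000000000000000000000001"
    balance_wei = w3.eth.get_balance(wallet)
    print("balance:", w3.from_wei(balance_wei, "ether"), "ETH")

Exercising deployment and purchase flows against a test network like this costs nothing and surfaces contract issues before the public sale.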

Nitor implements all these and more so that you can hold a profitable ICO.

Key Considerations

Before running your ICO, decide on a minimum funding goal (known as a soft cap). Complete the requisite research beforehand to understand what this number should be. Next, remember to avoid issuing tokens before the sale ends. This is important as record keeping becomes easier. Issue tokens only after the minimum funding goal is achieved and the token sale officially ends. If the minimum funding goal is not achieved, however, it is best to refund the money and modify your approach. With the guidance of Nitor’s dedicated experts, you can avail the benefits of a secure sandbox ICO platform to pre-test token sales.

Nitor’s services can help you at every stage of the ICO process. Nitor can get to the heart of the complicated code of Smart Contracts, leaving you free to strategize and innovate. We also help you drive innovation with 70% of ICO contract features already in place. With our knowledgeable teams, you can set the stage with your ICO website within two short weeks.

ICOs are one of the most useful, secure, and transparent tools for fundraising today. Nitor can help you leverage the brilliant power of Blockchain technology with the application of the aforementioned tips. With our experts, you can craft a brilliant strategy to generate the funding that your revolutionary product deserves.

If you would like to benefit from our world-leading Blockchain arsenal to raise funds for your venture, reach out to us at marketing@nitorinfotech.com.

Best Practices for Fixed Price Proposals

Fixed price proposals are tricky to deal with. Any fixed price, fixed schedule proposal runs the risk of budget and/or schedule overrun, which can affect the profitability of the project.

Best practices for writing a fixed price proposal that covers these risks are given below.

  1. Proposing a Discovery Phase where requirement clarity is around 70%. This helps in:

a. Locking the scope of the entire project, in the interests of predictability. A Flexibility Matrix, for example, will help guide initial scope and planning discussions. A WBS (Work Breakdown Structure) can organize the work to be completed by the entire team.

b. Finalizing the technology stack

c. Finalizing the UI/UX

d. Finalizing the business rules

e. Finalizing the hardware/devices etc. to be supported

f. Creating a technical rapport with the customer team

g. The deliverables of the discovery phase should be as follows:

  • Architectural recommendations
  • Detailed user stories documented for the project along with acceptance criteria
  • UI/UX defined along with wireframes/mock-ups
  • Detailed release plan created along with sprints defined in all releases
  • Commercials & milestones defined as per the defined scope

2. Proposing an MVP (Minimum Viable Product) where clarity is low

a. This helps the customer to gain confidence

b. A customer can approach the market and get market feedback

c. Customer can define the next plan of action

3. Proposing the creation of a Proof of Concept (POC) for technical items, to bring clarity to the requirements when the technical approach is unclear

a. This helps in finalizing the approach, which can then be estimated

b. Customer gains confidence about the approach

4. Proposing multiple approaches:

a. Multiple technologies approach

b. Multiple timelines and budget approaches

c. This helps in gauging the customer’s budget and technology preference; in turn, the shared understanding evolves.

5. During estimation, appropriate padding/buffer needs to be added:

a. The padding or buffer should be over and above base development efforts. The development efforts should include the infrastructure setup, design, architecture and user interface efforts.

b. A Rough Order of Magnitude (ROM) estimate should also be calculated. A ROM is a cost estimate provided for budgeting purposes. Seventy-five percent accuracy is considered acceptable for a ROM.

c. If more than 5 resources are allocated per module of the project, redundancy has to be considered

6. Mapping of customer’s objectives to our approach plan

a. List all the customer’s objectives – technical, strategic, operational, go-to-market, process, etc.

b. Map these objectives and provide an action plan with periodic reviews

 

7. Documenting the assumptions in detail

a. Assumptions are the basis of estimates. Therefore, they must be documented in detail

b. Assumptions should be classified as:

  • Scope related – These are assumptions about the overall scope of the project/engagement
  • Technical assumptions – These can involve the technology stack, integration with existing software, globalization, devices/browsers etc. supported, user interface, interfaces exposed etc.
  • Non-functional requirements related – These can involve security, performance, scalability, hosting etc.
  • Project Execution – Execution methodology, execution and documentation tools, collaboration techniques, points of contact etc.
  • Expectations from customer – Customer specific requirements such as infrastructure, VPN/network, software/hosting licenses etc. should be documented

8. Attaching appropriate case studies in the proposal – technical, domain, process-related

9. Having sync-ups with customers before the proposal due date to get as many details as needed

10. Change requests are important for any fixed price proposal, as scope creep can affect the cost and/or schedule, leading to situations in which it may become necessary to re-baseline immediately.

a. Change request process should be documented. This includes:

b. Change management process for scope and schedule changes

c. Change request logs – documentation of change requests

d. Change request steering committee definition – this is for approval of CRs and escalation handling

e. Change request approval process

f. Procedure for change request addendum for cost and/or schedule changes

g. The proposal should also define the upper limit of the cost of a change request as a percentage of the overall cost of the project.

11. Providing an appropriate governance structure as per customer characterization

a. Strategic customers should have a fortnightly review with the steering committee, including the leadership team and a weekly sync up with the management team

b. Mid-size customers should have a monthly sync-up with the steering committee, including the leadership team and a weekly sync up with management team

c. Start-ups should have a heightened sync-up – twice-weekly reviews with the management team

12. Define payment milestones as below:

a. Kick-off should have a major chunk of payment – 30-40%

b. Milestones until QA should be defined as per actual resource loading

c. UAT and defect-fix milestones should span 2-3 weeks and should carry the final 10% of payment

d. Documenting all payment details – travel, payment realization etc. – and apprising customers

Patient Engagement beyond Patient Portal

The healthcare industry has evolved at a rapid pace since the Affordable Care Act (ACA) was enacted in 2010. In the last 7 years, the dimensions of business and technology changed for every entity in healthcare. The change in the payment model (from fee-for-service to value-based payment) put the patient at the center of this horizon.

This helped the industry move toward achieving the ‘Triple Aim’ of healthcare: improving the patient experience of care, improving the health of populations, and reducing the per-capita cost of care.

 

Providers are facing multiple challenges while implementing the processes to achieve holistic population health management. Most providers offer a Patient Portal (a type of health portal) to their patients as a way of modernizing their patient engagement solutions. However, do patients use the Patient Portal to the fullest? In our opinion, providers have wide scope to implement new features to enhance patient engagement and optimally leverage their patient engagement strategies.

Most providers jumped at adopting the Patient Portal to meet the MU1 and MU2 compliance requirements. However, the true benefits and potential of this patient engagement technology have not been realized. Patients and providers will experience the true benefit of the Patient Portal when proactive patient participation is part of the system. To achieve this, providers need to partner with patients at multiple levels. This will bolster patient participation.

Patient Engagement challenges in the existing ecosystem:

In the current healthcare ecosystem, most providers facilitate patient portals for compliance purposes only. If your Patient Portal is not utilized by patients to the fullest, then its cost is an overhead for your practice. Beyond compliance, organic usage of the Patient Portal by patients will be very helpful for practices.

Proposed changes in Patient Engagement Strategy

 

Patient Registration:

Patient registration, which needs admin staff, is always an overhead activity for providers. In busy hours, situations can get messy, and the chances are high that admin staff will enter wrong patient information. This small mistake could lead to claim denial when the provider sends the patient’s information to the payer.

Our Recommendations: Currently, very few patient portals allow patients to pre-register before entering the hospital. This results in extra costs that the hospital needs to bear on registration staff, and it leads to chaos at hospital registration counters. Patient portals should offer pre-registration, letting patients enter all their demographic details and insurance information themselves.

Patient Access:

In existing Patient Portal systems, the patient can view only limited, previously known information, such as demographics and scheduling. The Patient Portal should instead showcase a 360-degree view of the patient, giving a snapshot of actionable insights.

Our Recommendations: As mentioned above, providers started giving patients access to their information to meet MU compliance. Currently, a patient can view only limited information, which he/she already knows. The patient should be able to view all his/her electronic health records, including episode-level medical history, real-time vital stats, and personalized patient education. This will be helpful in moving towards patient participation. To implement these changes, wearable device integration and patient education material need to be incorporated into the patient portal.

 Patient Participation:

The current Patient Portal gives very few opportunities to patients to participate in the care delivery process. One more reason for little patient participation is having limited information to view. If the patient can get access to more information, he/she might participate actively and get a better quality of care.

Our Recommendations: Currently, patient participation is limited to scheduling. The patient should feel that he/she is a part of this system. This could be achieved by implementing new functionalities in a Patient Portal, such as health-related assessments at the time of scheduling an appointment, personalized preferences for patient education, and feedback on the provider’s services. In the long run, telemedicine could be a part of a Patient Portal to enhance patient participation.

Patient Conferring:

The goal of the Patient Portal should be the proactive participation of patients in the care delivery process. This will enable physicians to get more patient-centric data, which will be more insightful when generating actionable data. Currently, patient participation is limited to scheduling, sharing medical reports, etc. Sharing behavioral trends/changes, along with an intensive feedback system, will be helpful in managing chronic diseases and in long-term coordinated care.

Our Recommendations: The next level of patient participation will be patient conferring. Currently, very few patient portals allow the patient to participate in the care delivery process. The proactive participation of patients will be very helpful for physicians in arriving at an exact conclusion in a short period. A continuous feedback system, sharing of emotions and behavioral patterns, and healthcare gamification scores are some of the things that will enable patients to proactively participate in the care delivery process.

Patient’s Wellness:

In the last couple of years, wearable technologies have emerged that can spur the growth of patient engagement. Tracking real-time vital stats via, for example, telehealth can easily be used to move the focus from illness to wellness.

Our Recommendations: We feel that patient engagement can reach its peak only if both clinical information and wellness information are used in an appropriate way. Wearable device integration, real-time activity tracking, personalized and team wellness goals, rewards for goal accomplishment, etc. are viable avenues for investment. This type of wellness information, combined with clinical information, will be very helpful for physicians in ensuring quality of care.

As we know, a Patient Portal is not the only tool for patient engagement. However, we feel that a Patient Portal is a platform into which most other patient engagement tools could be merged seamlessly – for example, telemedicine, wearable data integration and much more. We feel that ‘Patient Portal 2.0’ is an opportunity for providers to take their practice to the next level by offering an integrated patient experience.