Over the past decade, Robotic Process Automation (RPA) has clearly emerged as one of the key levers to expedite the business transformation journey for many organizations. While most of the major RPA product vendors are busy selling an over-simplified view of the technology built on a visual programming approach, it makes sense to evaluate the outcome holistically against common IT principles.
“Any non-IT professional can build an RPA BOT using our platform.” This sounds like a common pitch across RPA products, but then why does RPA have such a high failure rate? Why is it difficult to sustain RPA benefits over a longer period? Why do RPA BOTs continue to be a security threat in the overall IT environment? The industry took these questions very seriously and started working towards a common framework to ensure quality BOT outcomes.
Irrespective of the RPA product, most of these standards and best practices can be grouped under the following five categories:
- Readability – Ease of understanding the code, achieved through elements like naming convention standards and compliance, zero junk code, componentization, and simplified logic.
- Configurability – Ease of managing changes and calibrating the BOT, achieved through generic design and configurable parameters such as performance parameters, URLs, file and folder paths, email IDs, credentials, business rule thresholds, log messages, and email formats.
- Reliability – Degree of accuracy with a minimal exception rate, achieved through robust exception handling, the best possible interaction technique, avoidance of memory leaks, and appropriately designed auto-recovery and auto-healing mechanisms.
- Security – Degree of freedom from known and unknown threats, achieved through authorization and authentication, credential management, and controlled business data storage and sharing.
- Performance – Minimal average handling time, achieved through efficient delay management, parallel execution, the optimal interaction technique, efficient memory management, and efficient business logic configuration.
Readability is one of the most important aspects of maintainability: any qualified professional should be able to understand the BOT code with ease.
Naming convention and compliance – Apart from giving meaningful names to code objects like workflows, functions, variables, arguments, and activities, the key is to define a standard naming convention and, most importantly, follow it consistently thereafter. While camel case is the most commonly used pattern and fits well with most RPA products, it is also suggested to define prefixes or suffixes depending on the RPA product and code object type. For example, in the case of arguments, input and output direction can be indicated with prefixes like “in_” and “out_”.
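Conventions like these are only useful when they are enforced. A minimal sketch of an automated compliance check, in Python: the regex encodes the "in_"/"out_" prefix plus camel-case rule described above, and the argument names are purely illustrative.

```python
import re

# Hypothetical convention: a direction prefix ("in_" or "out_") followed by
# a camelCase name. Adjust the pattern to whatever standard the team defines.
ARGUMENT_PATTERN = re.compile(r"^(in|out)_[a-z][a-zA-Z0-9]*$")

def is_compliant(argument_name: str) -> bool:
    """Return True if the argument name follows the agreed convention."""
    return bool(ARGUMENT_PATTERN.match(argument_name))

def audit_arguments(names):
    """Return the argument names that violate the convention."""
    return [name for name in names if not is_compliant(name)]

# Illustrative audit over a few made-up argument names.
violations = audit_arguments(["in_invoiceAmount", "out_statusCode", "TempVar1"])
```

A check like this can run as part of code review or a CI gate, so non-compliant names are caught before the BOT reaches production.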
Commenting – The appropriateness of commenting is always debatable. The key is to strike the right balance between too much and too little. A thumb rule can be to require a comment before every container activity, i.e. one that holds one or more child activities. A few RPA products provide annotation functionality, and in such cases it is suggested to prefer annotations over comment activities, as this reduces the activity count and enhances readability. Additionally, it is good practice to place a comment before every core business logic block.
Zero junk code – It is of utmost importance to remove all junk code, such as unused code objects and disabled code, primarily for readability and to avoid unnecessary memory consumption. Because junk code has no impact on BOT functionality, much of the developer community happily ignores it. A few RPA products also provide features to auto-remove junk code, mainly unused variables; one should look for these features and actively use them.
Componentization – We all know the old saying, “divide and rule.” The same applies here, but the trick lies in how we divide, or componentize, the code. Choosing the right dimensions of componentization for the process at hand is the key to success. A few dimensions to look out for:
- Business process centric – Process, Geography, Organization, Client, etc.
- Application centric – Application, Screen, Database, Webservice / API, etc.
In summary, any part of the code that is reusable, vulnerable to future change, or falls under a different milestone of the deployment road-map is an ideal candidate for componentization. Another quick check for the optimal level is the number of activities or actions inside a workflow. While there is no fixed standard, it is suggested to be watchful beyond 100 activities.
Code complexity – Implementing simplified logic is the key here. A few items to watch out for:
- Optimal usage of variables and arguments.
- Avoidance of nested ifs and loops.
- Usage of the right activities or actions for a given logic, based on their availability in the RPA product.
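The point about nested ifs translates directly into ordinary code. The sketch below shows the same (hypothetical) invoice-routing logic written twice: once with three levels of nesting, and once flattened with guard clauses, which is the style the bullet above advocates.

```python
# Illustrative only: both functions implement the same routing rule over a
# hypothetical invoice dictionary; only the shape of the logic differs.

def route_invoice_nested(invoice):
    # Hard to read: each condition pushes the happy path one level deeper.
    if invoice.get("amount") is not None:
        if invoice["amount"] > 0:
            if invoice.get("approved"):
                return "process"
            else:
                return "await_approval"
        else:
            return "reject"
    else:
        return "reject"

def route_invoice_flat(invoice):
    # Guard clauses replace three levels of nesting with a linear flow.
    amount = invoice.get("amount")
    if amount is None or amount <= 0:
        return "reject"
    if not invoice.get("approved"):
        return "await_approval"
    return "process"
```

In an RPA product the equivalent refactoring is to exit early (e.g. route to an exception queue) instead of wrapping the main flow in ever-deeper If activities.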
In a dynamic business environment, change is the only constant. It can be managed through generic design, but an optimal level of generalization must be maintained to balance implementation effort against change management effort. While there are many possibilities, the following are must-consider candidates for configurability:
Folder structure, file paths, etc. – The first step is certainly to make them all configurable; beyond that, configuring them as relative paths helps the BOT cope with system-level directory changes.
Application paths, names, URLs, etc. – While these all look like clear constants, there are situations where they differ from environment to environment, geography to geography, and so on. It is therefore always suggested to define them as configurable parameters.
Email IDs, templates, etc. – Among the most dynamic parameters, and thus ideal candidates for configurability.
Functional values – This one is slightly tricky, as it depends entirely on the process being automated. The suggestion is to identify all functional values that are vulnerable to change; sometimes historical analysis also provides good insight. For example: if the invoice amount is greater than 1,000 USD, follow process 1, else follow process 2. Here, the 1,000 USD threshold is clearly a candidate. In the RPA world, this type of requirement typically gets missed during business analysis, so the developer should ideally identify and verify all such scenarios.
Performance tuning parameters – The technology carries a strong prefix, “Robotic,” which clearly indicates that a BOT needs to work in different environments; robot calibration is the key here. For example, the test environments of source applications are typically slower than production. If all performance parameters, like waits and timeouts, are configurable, the BOT can easily be calibrated for production, which helps maintain a lower overall exception rate. This is one of the most important aspects of configurability, and one that a large part of the developer community often misses.
Error messages – As a good practice, it is suggested to configure all messages at a centralized level, mainly to standardize messages across the BOT.
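Pulling the items above together, a centralized config file might look like the following minimal sketch, here using Python's standard configparser. All section names, keys, and values are illustrative assumptions, not from any real project.

```python
import configparser
import io

# Hypothetical config covering the configurability candidates discussed above:
# paths, URLs, business thresholds, performance parameters, and messages.
CONFIG_TEXT = """
[paths]
input_folder = data/invoices

[urls]
erp_login = https://erp.example.com/login

[thresholds]
invoice_amount_usd = 1000

[performance]
element_timeout_seconds = 30

[messages]
missing_invoice = Invoice %%s not found; routed to exception queue.
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(CONFIG_TEXT))

# Typed accessors keep the business logic free of literal values.
threshold = config.getfloat("thresholds", "invoice_amount_usd")
timeout = config.getint("performance", "element_timeout_seconds")
```

With this layout, calibrating the BOT for production is just an edit to the `[performance]` section, and a threshold change never touches the code. Most RPA products offer an equivalent mechanism (e.g. a Config sheet or asset store).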
As RPA BOTs are intended to work without major human intervention, a high reliability factor becomes essential. With the evolution of RPA products this is no longer an unachievable target, but the key is the right level of product configuration. Here are a few factors to watch out for:
Exception handling – The objective of any RPA BOT is to process transactions with 100% accuracy and a minimal exception rate. Even when the BOT fails to process a transaction, it should route the item as an exception for manual intervention. The BOT will encounter many unknowns in real-life scenarios, so exceptions are a reality, but the BOT should always be able to exit gracefully. It is therefore important to define a robust exception handling approach at the very beginning: the complete code should be covered by a try-catch construct, with all known exceptions identified as business exceptions.
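The structure just described can be sketched in ordinary Python; `BusinessException` and the processing step are hypothetical stand-ins for whatever the real process does, but the shape (known exceptions routed for review, unknown ones caught for a graceful exit) is the pattern above.

```python
class BusinessException(Exception):
    """A known, expected failure (e.g. a rule violation) needing human review."""

def process_transaction(transaction):
    # Placeholder step: a missing amount is a known business rule failure.
    if transaction.get("amount") is None:
        raise BusinessException("Missing invoice amount")
    return "processed"

def run_bot(transactions):
    results = {"processed": 0, "business_exceptions": 0, "system_exceptions": 0}
    for transaction in transactions:
        try:
            process_transaction(transaction)
            results["processed"] += 1
        except BusinessException:
            # Known exception: route the item to a manual-review queue.
            results["business_exceptions"] += 1
        except Exception:
            # Unknown/system exception: record it and continue, so the BOT
            # still exits gracefully instead of crashing mid-run.
            results["system_exceptions"] += 1
    return results

summary = run_bot([{"amount": 10}, {}])
```

In an RPA product the same shape appears as a Try Catch around the whole transaction flow, with the two catch branches feeding different queues or notifications.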
Choosing the right interaction technique – Identifying the most suitable interaction technique for a given scenario is no longer an art; following a preference order helps here. The most preferred option is back-end / service-level integration, then UI / script-level integration, and finally, if nothing else works, surface integration. Surface integration is the most vulnerable to future changes in source applications and leads to the least resilient BOT. We often observe developers falling back on surface integration when they cannot settle on UI integration, without investing enough time to investigate and identify suitable selectors. It therefore becomes necessary to have highly proficient resources review all such scenarios.
Safe exit – Closing all opened applications and freeing up memory at the end of each BOT session is the key here. It is suggested to explicitly verify that all such instances are closed.
Window and field handlers – Identifying the most suitable selector is always a challenge. We often observe that a given selector works fine today but fails tomorrow. It is therefore necessary to make selectors as robust and reliable as possible; the key is making them dynamic and testing them against all possible scenarios.
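One common way to make selectors robust is to rank several candidates, from most specific to most generic (e.g. a wildcarded window title), and retry the whole list briefly before failing. The sketch below is a hypothetical illustration: `find_element` is a stub standing in for the RPA product's element lookup, and the selector strings are invented.

```python
import time

def find_element(selector):
    # Stub for the product's lookup: pretend only the wildcard title matches
    # the window as it appears today.
    known_windows = {"title='Invoice * - Editor'"}
    return selector if selector in known_windows else None

def resolve_with_fallbacks(selectors, attempts=3, delay=0.01):
    """Try each selector in priority order; retry the list a few times."""
    for _ in range(attempts):
        for selector in selectors:  # most specific first
            element = find_element(selector)
            if element is not None:
                return element
        time.sleep(delay)  # brief pause before retrying the whole list
    raise LookupError("No selector matched: " + ", ".join(selectors))

element = resolve_with_fallbacks(
    ["title='Invoice 4711 - Editor'", "title='Invoice * - Editor'"]
)
```

Most RPA products support the same idea natively through wildcarded or variable-driven selector attributes; the fallback-list pattern is useful when a single dynamic selector cannot cover every variant.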
Surface automation – Even though this is the least preferred approach, it is a reality for a few scenarios, so it is important to follow all the best practices associated with it: for example, maximizing the window before starting, choosing relative coordinates, and so on.
Almost all RPA products are equipped with very high security standards, but with a disclaimer: the security of the BOT depends on the way it is configured. We understand the vendors' position, as they can only secure what falls within their scope. Thus, BOT security depends not only on the RPA product but mostly on how the BOT is configured.
When it comes to security, apart from infrastructure architecture, user management, access management, and deployment strategy, the two most important aspects to consider during implementation are (a) data storage and (b) data sharing.
Data storage – This is essentially about storing only required and legitimate data, both inside the code and in the logs. Watch out for the following pointers:
- Avoid hard-coding, and make required data like email IDs, credentials, file paths, etc. configurable.
- Check the data being stored in the logs. While it is permissible to store transaction IDs for better traceability, it is also necessary to ensure the right approvals are in place to do so. Typically, no other business data should be stored in the logs.
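Log hygiene can also be enforced programmatically. A minimal sketch using Python's standard logging module: a filter masks anything that looks like an email address before it reaches the log output. The regex is illustrative and would need extending for other sensitive fields (account numbers, names, etc.).

```python
import io
import logging
import re

# Illustrative pattern: matches typical email addresses in log messages.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class RedactingFilter(logging.Filter):
    """Mask email addresses in every record before it is emitted."""
    def filter(self, record):
        record.msg = EMAIL_RE.sub("<redacted>", str(record.msg))
        return True

# Capture log output in memory so the effect is easy to inspect.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("bot")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.addFilter(RedactingFilter())

logger.info("Processed txn 42 for john.doe@example.com")
output = stream.getvalue()
```

The transaction ID survives for traceability while the business data does not, which matches the policy described above. Most RPA products expose similar hooks, e.g. configurable log fields or message templates.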
Data sharing – This is where we see the maximum security risk. If email IDs, application executable paths, folder paths, etc. are hard-coded, or perhaps hiding somewhere inside the code, there is a high chance the BOT will end up sharing business information it is not supposed to share. It is suggested to check for all such instances, to be doubly sure, before putting the BOT into production.
Just as measuring the performance and productivity of the human workforce is important, it is equally important to measure the performance of RPA BOTs. Any improvement in a BOT's average handling time directly enhances return on investment (ROI) by saving RPA product license and infrastructure costs, so it is necessary to focus on this aspect.
There are many similarities between the performance of human workers and RPA BOTs. The human workforce is trained, monitored, and re-trained on a continuous basis to achieve optimal performance and productivity. Just as HR managers define key performance indicators (KPIs) for measuring the performance and productivity of human workers, IT and business teams need to define key performance areas for RPA BOTs to ensure their stability in production.
Delay management – Delay management in RPA BOTs refers to effectively identifying and reducing unwarranted delays in BOT execution through (a) design and architectural considerations, (b) coding practices, and (c) infrastructure and network setup considerations.
- Design and architectural considerations – IT and business RPA design teams should take efficient practices into account while putting together the design and architecture of BOTs. The design team should accurately estimate the time required for a given activity, as BOT performance can vary with the volume of data being processed, e.g. designing the BOT topology based on volume patterns, scheduling patterns, application availability windows, etc.
- Coding practices – Teams developing RPA BOTs should thoroughly review the code to ensure the following activities and properties are used appropriately: static delays, delay-before and delay-after properties, timeouts, delay between keys, duration properties, wait-for-ready activities, etc.
- Infrastructure and network setup considerations – Infrastructure sizing and network setup should be efficient enough to prevent network and infrastructure latencies from impacting BOT execution. Differences between development and production environments, such as screen resolutions, can impact a BOT's stability.
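The most common delay-management fix is replacing fixed sleeps with a polling wait that returns as soon as a readiness condition holds. A minimal sketch, where `condition` stands in for a real check such as "target element is visible":

```python
import time

def wait_until(condition, timeout=5.0, poll_interval=0.05):
    """Poll `condition` until it is true or `timeout` seconds elapse.

    Returns True on success, False on timeout. Unlike a static delay, this
    costs only as long as the application actually takes to become ready.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

# Simulate an application that becomes ready after roughly 200 ms.
start = time.monotonic()
ready_at = start + 0.2
ok = wait_until(lambda: time.monotonic() >= ready_at, timeout=2.0)
elapsed = time.monotonic() - start
```

With a static 2-second delay the BOT would always pay 2 seconds; with the polling wait it pays only ~0.2 seconds here, and the timeout itself can live in configuration, tying back to the performance tuning parameters discussed earlier.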
Parallel execution – With regard to RPA BOTs, parallel execution refers to (a) developing BOTs to execute process steps, activities, or actions in parallel instead of sequentially, and (b) executing production BOTs in parallel to ensure optimal utilization and volume sharing.
- Developing BOTs to execute process actions in parallel – Development and QA teams should review the BOT code logic to ensure that actions which can run in parallel are coded accordingly. For example, when fetching data from Excel and populating a form in an application, RPA products such as UiPath allow this to be done sequentially, where each field is populated one after another, or in parallel, where all data items are populated in the form in one go using a Parallel activity. The development team should carefully review such scenarios.
- Executing production BOTs in parallel – Architects and production BOT management teams should design the BOT topology to run BOTs in parallel, ensuring optimum utilization in production. Identify the different components of the automation and their interaction style to accomplish maximum BOT utilization and meet process-level SLAs.
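The payoff of the first point can be illustrated in plain Python with a thread pool: four independent per-field actions, each simulated as a 100 ms UI interaction, complete in roughly the time of one when run in parallel. The field names and timing are hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fill_field(name):
    """Stand-in for one slow, independent UI interaction."""
    time.sleep(0.1)
    return name

fields = ["vendor", "amount", "due_date", "currency"]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fill_field, fields))
parallel_time = time.monotonic() - start
# Sequentially this would take ~0.4 s; in parallel it takes ~0.1 s.
```

The same caveat from the text applies: only actions that are genuinely independent (no shared state, no ordering requirement) are safe to parallelize, which is why the development team should review each such scenario.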
Interaction technique usage – The same concept discussed under the reliability umbrella applies to accomplishing high BOT performance. The choice of interaction technique has a significant impact on average handling time: selecting the right technique can improve performance by five to ten times.
Memory management – Memory management in the RPA world refers to mechanisms for eliminating over-use of memory resources by BOTs and eliminating memory leaks. While configuring BOTs, development teams should pay attention to actions such as opening the same file multiple times, frequently opening and closing files, and not closing browsers and database connections explicitly. Production monitoring teams should schedule maintenance activities, including restarting BOTs at a desired frequency, which releases resources such as memory and any browsers or database connections left open.
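The "close what you open" advice maps directly onto deterministic resource handling in code. A small sketch using Python's standard library: context managers and an explicit `close()` guarantee the file handle and the database connection are released even if a processing step fails. The file contents and table are illustrative.

```python
import os
import sqlite3
import tempfile

def count_lines(path):
    with open(path) as handle:          # file is closed when the block exits
        return sum(1 for _ in handle)

def record_count(db_path, value):
    connection = sqlite3.connect(db_path)
    try:
        with connection:                # commits (or rolls back) the transaction
            connection.execute("CREATE TABLE IF NOT EXISTS runs (n INTEGER)")
            connection.execute("INSERT INTO runs VALUES (?)", (value,))
        return connection.execute("SELECT COUNT(*) FROM runs").fetchone()[0]
    finally:
        connection.close()              # explicit close, mirroring the advice above

with tempfile.TemporaryDirectory() as tmp:
    text_path = os.path.join(tmp, "input.txt")
    with open(text_path, "w") as f:
        f.write("a\nb\n")
    lines = count_lines(text_path)
    rows = record_count(os.path.join(tmp, "runs.db"), lines)
```

In an RPA workflow the equivalents are Close Application / Close Browser / Disconnect activities placed in the Finally (or safe-exit) section, so they run regardless of how the transaction ended.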
While configuring BOTs, development teams should also keep in mind to use the most suitable out-of-the-box features and activities provided by the RPA product, avoid developing custom activities for already existing product features, and divide process steps in a way that enables shorter, simpler, and more efficient logic in place of lengthier, complex logic. Avoid hard-coding values and data, and choose an optimal level of logging.
CEO & Founder, BOT mantra
Delivery Head, BOT mantra