Kreeti Technologies Celebrates Excellent Customer Service!

If you’ve ever considered an offshore development partnership, you know that there are a lot of options to choose from. In fact, India alone accounts for roughly 55% of the global IT outsourcing market.

With so many talented firms to decide between, it can be tough to know what separates the good from the great. That’s where Kreeti Technologies comes in.

Kreeti’s technical team has expertise in niche technologies such as Ruby on Rails, Elixir, Phoenix, Kotlin, ReactJS, and React Native.

Since we opened our doors in 2006, we’ve been working hard to deliver solutions that stand out from the crowd. That means we don’t just provide top-rated design and development services — we go above and beyond for our customers.

We tailor each engagement to each client’s unique needs. The result? We’ve received rave reviews from satisfied customers around the world.

For instance, we recently received feedback on Clutch, a hub for B2B ratings and reviews. In the engagement, we were responsible for ERP development for an international trading business.

Our client was thrilled with our system and the impact it had on their internal efficiency.

“We have gone live with our system and can see that we currently have saved two man days per week. This equates to one full time employee. By the time we finish the phase 2 we anticipate richer communication and save at least three FTE.” — Director, International Grain Trading Firm

In another project highlighted on the platform, we provided development services to a SaaS company. Our team’s professionalism and reliability made for a smooth and successful engagement.

“I was impressed with their deliveries and the ability to fix bugs whenever they occurred. Over time, I was confident that if there is any issue whatsoever, the team at Kreeti will take care of it. This is a big relief for any team leader. In short, it means I can sleep well.” — Founder & CEO, IndustryPrime

You can also find us ranked on The Manifest, another platform in the B2B space that includes lists of leading agencies and showcases projects. Check us out!

If you like what you see, feel free to reach out to us at any time for your next software development needs.


Code Evaluation/Testing Service using Docker API

Docker has been a game changer and has revolutionized containerization technology.

We are aware of the benefits of Docker and how it lets us run lightweight containers easily, without the hassle of heavy virtual machines.

Most of us use the Docker client to talk to the Docker daemon. While this works for most of our use cases, did you know that you can also issue Docker commands programmatically using the Docker API?


Docker provides an API to interact with the Docker daemon, as well as SDKs for Go and Python.

The Docker Engine API is a RESTful API that can be accessed by an HTTP client such as wget or curl, or by the HTTP library that is part of most modern programming languages.

So, if you want to issue commands to the docker daemon, you can do so programmatically using the API.

For example, to get a list of all running containers you can try…

curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json

This returns a list of all running containers, with detailed information about each, in JSON format.

Use Cases?

So, what are the use cases for the Docker API? Well, whenever you want to issue commands to the Docker engine programmatically, this API can help.

One such use case is running arbitrary code submitted by a user in an isolated environment via Docker.


Suppose you want to build a code evaluation service for a coding website. People can upload their code, and this service will run and test the code against some test cases. However, a user could upload malicious code that is potentially harmful to the system.

To avoid this, we will run the code inside Docker containers, safely isolating our servers from any potentially malicious code.

An example: running arbitrary code inside a Docker container using the Docker API in Elixir

The following is an example of how you can build a service like this in Elixir.

We will use HTTPoison, an HTTP client for Elixir, to make requests to the Docker API.

Set the base URL for requests to the Docker API:

  # Base URL for the Docker unix socket unix:///var/run/docker.sock
  @base "http+unix://%2Fvar%2Frun%2Fdocker.sock"

Create a container; this will return the ID of the container (we assume the Jason library for JSON encoding and decoding):

  %HTTPoison.Response{body: body} =
    HTTPoison.post!(@base <> "/containers/create",
      Jason.encode!(%{"Image" => image_name,
        "Cmd" => ["/bin/bash", "-c", "tail -f /dev/null"]}), # keeps the container running
      [{"content-type", "application/json"}])
  %{"Id" => container_id} = Jason.decode!(body)

Start the container you have created, using its ID:

  HTTPoison.post!(@base <> "/containers/#{container_id}/start", "")

Create an exec instance. Here, command is any command that you want to execute inside the container.

For example, the following JSON command string compiles a C file:

  command =
    ~s({"AttachStderr":true,"AttachStdin":true,"AttachStdout":true,"Cmd":["/bin/bash","-c","gcc -std=c99 -o a.out main.c"],"Tty":true})

  %HTTPoison.Response{body: body} =
    HTTPoison.post!(@base <> "/containers/#{container_id}/exec", command,
      [{"content-type", "application/json"}])
  %{"Id" => exec_id} = Jason.decode!(body)

Start the execution of the exec instance you have created:

  HTTPoison.post!(@base <> "/exec/#{exec_id}/start",
    Jason.encode!(%{"Detach" => false, "Tty" => true}),
    [{"content-type", "application/json"}])

Stop the container:

  HTTPoison.post!(@base <> "/containers/#{container_id}/stop", "")

Delete the container:

  HTTPoison.delete!(@base <> "/containers/#{container_id}")


The above example demonstrates how you can easily spin up Docker containers to run arbitrary code, safely isolated from your server.

Using the Docker API, you can do almost anything that you can with the Docker client.

I hope this blog helps you someday if you encounter a possible use case for it.

Good Day!


Refactoring Legacy Rails Application

Many times as developers we are confronted with having to work on legacy applications which may be poorly written or architected, or which have simply aged without proper maintenance.

Many of these applications may be vital and critical to the business; often the entire business of the organisation depends upon that one application. In either case we need to move forward, and cannot let things fester. Not updating the application can have many significant impacts, such as:

  • Security risks as the programming language version and the framework version in use, may no longer be supported.
  • Various performance issues, as the application is no longer able to handle the increased traffic load and growth in database size.
  • Data consistency issues may be happening as the application and database architecture doesn’t exactly capture the business requirements, as the goal post has moved.
  • Application errors and frequent bugs, because the accumulated layers of code make the application harder to understand and manage.
  • Database timeouts, deadlocks, race conditions.
  • Large memory footprint, CPU spikes, etc.
  • Integrating new packages or libraries which require a newer version of the language and/or framework can be challenging.
  • Decreased developer productivity, as developers cannot use the improved or better paradigms of the language and/or framework.
  • Harder to onboard new developers, because its complexity and/or archaic design make the application too hard to understand.

Don’t blame yourselves if you find yourself in this situation; many top, well-maintained open-source projects have found themselves here and had to reinvent themselves. Mozilla had to do this with Firefox and other projects it manages, OpenSSL was reinvented as LibreSSL, GNOME has done it a few times, and there are others. The bottom line is that “change is the only constant”.

In this article we will talk about the common ailments affecting such applications, and the long-term as well as short-term strategies which can be used to modernize them. So, let us enumerate the common pitfalls that make legacy codebases hard to work with:

  • Use of old syntax, or now deprecated language features.
  • Use of old libraries which are no longer being maintained.
  • Lack of test coverage, or sometimes tests are there, but they may not be comprehensive or covering the entire stack, especially the frontend.
  • Database denormalization and poor database hygiene.
  • Layers and layers of code superimposed on top of each other. Several control-flow (`if`) statements.
  • Repeated code patterns where the different copies have hard-to-spot differences or edge cases.
  • Gigantic application size, or lines of code.
  • Messy class hierarchies, huge classes, and methods that are hundreds of lines long.
  • Fails the code smell tests, or has anti-patterns.

Most businesses operate under tight deadlines and fiscal constraints, and product teams give more importance to feature development and functionality improvements than to code refactoring or the clean-up of technical debt. So, it is very important for the engineering leadership of the company to underscore the importance of continuous refactoring and of taking care of technical debt. Just as a product or functionality needs to be continuously tinkered with and improved, the application code needs continuous overhauling to stay in good shape. So, how can one go about fixing this?

One simple strategy is to completely rewrite the application. You are starting afresh, so you can take a modern stack, follow newer and better practices, and your work doesn’t impact the existing application. The users are simply expected to switch from the old application to the new one. This strategy works if your application is small or only moderately complex, but for any reasonably sized or complex application it can be a mammoth undertaking. Just because it is a modern stack doesn’t mean the developers are fully familiar with it; they may have to learn or unlearn many things, and mistakes can be made. Replicating all the functionality of the existing application, with all its nuances, requires a strong and complete understanding of the old application, and all that knowledge may be hard to acquire. Also, the cost in terms of missed opportunity may be large for the product and business teams, or two different engineering teams may be required: one which continues to maintain the old application, and another which builds the new one. This strategy can be high risk and requires a detailed cost–benefit analysis.

An alternative strategy is to go for progressive and gradual refactoring. This is what is usually better suited for large and complex projects, and provides for incremental benefits, and hedges the risk. We are going to focus on this next, and discuss various approaches to solving it.

  • Selective rewriting – There may be parts of application which are good candidates to be moved to an external process or a micro-service. For instance, image processing can be easily moved to an AWS Lambda or Google Cloud Function microservice using the Serverless framework. Similarly, HTML to docx and PDF conversion, ElasticSearch / Solr indexing, report generation, etc. This is application specific, but all large applications will usually have such components which are very resource intensive, and can be easily made an external modern service, and hence reduce the footprint of the legacy application.
  • Integrate Code Syntax Checkers and Static Analysis tools – Integrate and configure the application to make use of tools like Rubocop, ESLint, Reek, etc. They should be well integrated with your application code review and CI pipeline. All Pull Requests (PR) should be checked and made sure that they pass the code quality check. It is tempting to run them and try to fix all the issues discovered in legacy application at one go, but the number of changes required can be very large, making the PR unwieldy, hard to review and risky to merge. Also, it may be a lot of time investment to fix all the issues. So, better to create separate tasks for each of the issues discovered, and gradually fix them over a period of time.
  • Cleanup Database Schema – Ensure that columns which should be not null, are flagged as such. Remove old and unused columns and tables. Try to remove *_cache columns which may be hard to maintain, but provide little benefits. Usually with some clever programming, the cache columns can be removed with no performance drawbacks, but with benefits such as avoiding multi-table locks and writes. Similarly try to remove other unnecessary database de-normalizations. It is not for nothing that premature optimization is looked down upon. Removing database de-normalizations and other unnecessary convolutions can simplify the application database architecture, as well as help in a lot of code cleanup.
  • Remove Dead Code – Remove commented or unused code. If it hasn’t been used for years, no one will use it in the future. In any case it can always be easily retrieved from the version control.
  • Track Everything in Refactor Stories – Any change which needs to be made should be all recorded in stories in the issue tracker. This will help the team in prioritizing them, estimating them, and this can turn out to be very helpful in deciding what should be done and when should be done. It will also help the product and business team to get clear visibility into the process, and also will help the engineering team in negotiating sprint story points for the technical cleanup related work.
  • Periodic Independent Review – It is always nice to have a fresh pair of eyes going over your application’s code and architecture. Teams tend to indulge in groupthink, so having an outsider’s opinion and review is always beneficial, and good for long term health.
  • Slow and Steady Improvements – The team should continuously strive to improve the quality of the codebase and application in every change they make. Every PR should increase the test coverage and reduce code complexity. A few story points in every sprint can be dedicated to technical-debt issues.
  • Refactor to DRY code patterns – An analysis should be done to find different parts of the application which have similar structures. Then after a thorough understanding of those areas have been developed, the common patterns should be extracted out to a shared module such that they can be easily reused, and are more extensible. These duplications can be in either backend or frontend code, an attempt should be made to reduce both with an equal zeal, with the objective of reducing the number of lines of code and simplifying the application.
  • Upgrading the Stack – A precursor to this is having a good test suite, or developing sufficient QA muscle by writing test cases and having team members well experienced with the various functionalities of the application. This can then be followed by removing all deprecation warnings, and then going through the various in-between releases to upgrade the app, while ensuring compatibility at each step.
  • Patterns to Refactor – There are Rails/web application design patterns like Service Classes, PORO, FormObjects, Decorators/ Presenters/Serializers, etc. which can be used to simplify the application architecture. 
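As a minimal sketch of the Service Class / PORO pattern mentioned above (the class name and discount rules here are invented for illustration), pricing logic that might otherwise be tangled into a controller action can be extracted like this:

```ruby
# A plain-old-Ruby-object (PORO) service class: all pricing rules live in
# one place instead of being scattered across a controller action.
class DiscountCalculator
  LOYALTY_DISCOUNT = 0.10  # 10% off for returning customers
  BULK_DISCOUNT    = 0.05  # 5% off for orders of 10+ items

  def initialize(order_total:, loyal_customer:, item_count:)
    @order_total    = order_total
    @loyal_customer = loyal_customer
    @item_count     = item_count
  end

  # Single public entry point; returns the payable amount.
  def call
    discount = 0.0
    discount += LOYALTY_DISCOUNT if @loyal_customer
    discount += BULK_DISCOUNT if @item_count >= 10
    (@order_total * (1 - discount)).round(2)
  end
end

DiscountCalculator.new(order_total: 100.0, loyal_customer: true, item_count: 12).call
# => 85.0
```

The controller action then reduces to a single call, and the pricing rules become trivially unit-testable in isolation.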

Refactoring is an important part of the application development life cycle, and I have tried to capture here some of the important points to keep in mind while refactoring a legacy application. Please feel free to share what you think in the comments below.

Surendra Singhi is a technical consultant and coach who helps organizations improve their software development practices to deliver great products and services by refining their processes, workflows and technical strategies. He works for Kreeti Technologies, and has mentored several development teams to better realize their potential and deliver increased value to stakeholders.

We at Kreeti Technologies provide the expertise to upgrade teams and take their products toward success. Do you have an app where you need help with refactoring, or feature development or scaling it up? Contact us at:


Employer Brand Management

1. Bringing Brand to Employment

Delivering a consistent and distinctive customer brand experience has always been a central concern of brand management. The brand management approach was first introduced by P&G in the 1930s and mostly dominated fast-moving consumer goods. When Philip Kotler first suggested the 4 Ps as a platform for marketing management, it was clear that he had product brands in mind. However, over time its scope extended into the service market as well, spreading its roots from banking to tourism to hospitality. As the years passed it extended further into the recruitment business, creating the whole new world, and term, of employer branding and employer brand management.

Employer brand management is a new approach to people management. The term is commonly used to describe an organisation’s reputation as an employer, and its value proposition to its employees, as opposed to its more general corporate brand reputation and value proposition to customers. The phrase ‘The Employer Brand’ gets over 400,000 Google searches per year and is known to management teams globally. Just as a customer brand proposition is used to define a product or service offer, an employee value proposition, or EVP, is used to define an organisation’s employment offer. Likewise, the marketing disciplines associated with branding and brand management have been increasingly applied by the human resources and talent management community to attract, engage and retain talented candidates and employees, in the same way that marketing applies such tools to attracting and retaining clients, customers and consumers.

2. Role of Employees in creating Brand Differentiation

It is widely recognized in the global market that satisfied employees are more likely to deliver a consistently positive service experience, and that they prove to be the best reference group for market expansion. They not only increase sustainable service brand differentiation, but also evoke a particularly distinctive style of service. It is generally agreed that these intangible brand characteristics are far more difficult for competitors to copy than the operational components of a service brand experience. Functional differentiation is still an important factor in driving competitive advantage, but the lead time before one is copied by a competitor has become increasingly narrow. So, even if one creates a completely new operating model, it is only a matter of time before competing companies beat one at one’s own game.

3. Attributes Affecting Employer Branding

Employer branding is an organizational culture that jointly represents employer loyalty and employee productivity. However, like a product brand, it is affected by various internal and external factors. A few of the major attributes affecting its success are discussed below:

  1. Basic Job Benefits: This covers all those aspects which connect the HR and marketing departments of an organization. It focuses on job description details, salary and bonus figures, other job and fringe benefits, and the organization’s contribution to maintaining work-life balance and stability.
  2. Overall Status of Company: It is a factor which analyses the position and eminence of the organization. This is mainly checked on the basis of the company’s reputation, market size of the company, organizational structure, company’s innovation and the company culture.
  3. Competitiveness: The competitive power of the organization determines the competitive ability of the employee also. Therefore, the competitiveness is determined on the basis of the intellectual challenges available in the job, chances to work independently and the chance to emerge as a leader or manager.
  4. Self-development: Self-development is an attribute which plays a major role to determine the employee loyalty. The better a company provides chances of self-development, the greater are the chances of employee retention. It can further be determined by the factors like possibility of continuous learning, chances to avail expertise, and chances to develop one’s skill.
  5. Future Opportunities: The futuristic approach of an organization determines its success. An employee is attracted to a company only when it provides a platform for strong future opportunities. These can include good references for future work and possibilities to work abroad.
  6. Psychological Balance: The ability of an organization to provide psychological satisfaction to be connected and being recognized by a brand name is the pivot of employer branding. Factors like pioneering as a brand, the feeling of being a positive contributor to the company, and expected cordial relation with superiors and peers are the major determinants of this attribute.

Any organization that has the capacity to effectively manage these attributes shall undoubtedly emerge to be an eminent model in the world of employer branding.

4. Conclusion

Making a link between brand, culture and customer experience is not new, but the practice of managing the link between these related domains has evolved significantly over recent years. In many respects, the notion of employer brand management simply completes a journey that began with keeping the management of employees in sync with the brand name of the company. Employer brand management provides a mechanism for translating the brand ethos into the everyday working experience of employees, in order to reinforce the organisation’s ability to deliver a consistent and distinctive customer brand experience, starting at home.


csv2sql – A Blazing-Fast Elixir CSV-to-SQL Loader

Loading CSV files into a database has always been a challenging task. Without the database tables already created, one has to go through the tiring process of manually inspecting the CSVs and writing the DDL queries to create the tables before the data can be loaded.

Again, when dealing with huge CSV files, the process can take a very long time to finish.

This is where csv2sql comes into the picture.

csv2sql is a blazing-fast tool written in Elixir for loading CSV files into a database.

csv2sql can automatically infer the basic types of data stored in your CSV files, create the database tables, and load the data, all with a single command.
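As a rough illustration of what “inferring the basic types” could look like, here is a simplified Ruby sketch; csv2sql itself is written in Elixir and its real inference is more sophisticated, so treat this only as an outline of the idea:

```ruby
require "csv"
require "date"

# Guess a reasonable SQL type for a single CSV field (simplified).
def sql_type_of(value)
  return "INTEGER" if value =~ /\A-?\d+\z/
  return "FLOAT"   if value =~ /\A-?\d+\.\d+\z/
  begin
    Date.iso8601(value)
    return "DATE"
  rescue ArgumentError
  end
  "VARCHAR(255)"
end

# Relative "width" of each type: if any value in a column needs a wider
# type, the whole column is widened (VARCHAR wins over the others).
WIDTH = { "INTEGER" => 0, "FLOAT" => 1, "DATE" => 1, "VARCHAR(255)" => 2 }

# Map each CSV header to the widest type seen in its column.
def infer_schema(csv_string)
  table = CSV.parse(csv_string, headers: true)
  table.headers.map do |header|
    [header, table[header].map { |v| sql_type_of(v.to_s) }.max_by { |t| WIDTH[t] }]
  end.to_h
end

csv = "id,price,joined\n1,9.99,2020-01-15\n2,12.50,2021-06-01\n"
infer_schema(csv)
# => {"id"=>"INTEGER", "price"=>"FLOAT", "joined"=>"DATE"}
```

A real loader would additionally handle NULLs, booleans, varying date formats, and sampling for very large files.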

Got a directory full of large CSVs that need to be loaded into a database?

With csv2sql it’s as simple as running a single command:

./csv2sql --source-csv-directory "/home/user/Desktop/csvs" --db-connection-string "username:password@localhost/my_database"

That’s all! csv2sql will analyze the files and insert them into the inferred data tables.

After it is done, it will give you a summary of the results.

Is it fast ?

Yes. csv2sql is written in Elixir, which makes the most of your CPU by processing the CSVs in parallel, greatly reducing the overall time taken, so it’s perfect for huge CSV files.
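To give a rough feel for the idea of per-file parallelism (csv2sql itself uses Elixir processes and GenServers rather than threads, so this is only an analogy), one could sketch it in Ruby like this:

```ruby
require "csv"

# Count the data rows of each CSV file, processing the files concurrently,
# one thread per file (a simplified analogue of a parallel CSV pipeline).
def row_counts(files)
  files.map { |file|
    Thread.new { [file, CSV.read(file, headers: true).count] }
  }.map(&:value).to_h
end
```

Each `Thread#value` joins its thread and returns that file’s `[file, count]` pair.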

Dependencies ?

csv2sql only requires Erlang to be installed; that’s it.

Find the csv2sql executable here.

Finally, if you want to know how it works, or if you are interested in Elixir or GenServers, find the repository here. This project is open source, and all contributions are welcome.

Good Day!


Entrepreneurship: Approach and Strategy

Minimum Viable Product

Dr. Rajiv Sikroria 
(Training & Placement Coordinator)
Sunbeam Women’s College Varuna, Varanasi

Entrepreneurship is not only about undertaking risk, but about undertaking risk in a calculated way. As entrepreneurs, we are often swayed by our business idea. We want to carry the idea forward. In such cases, entrepreneurs fall into the lap of luck. How far can one succeed without understanding the market pulse? How can an entrepreneur serve people without understanding what is actually needed in a market? Above all, whom is he going to serve?

In marketing management, we have studied that the consumer is king. How can you serve the king without understanding and analyzing his behavior? What is his minimum expectation from a product? Let’s talk about what Minimum Viable Product (MVP) stands for.

A Minimum Viable Product is defined by the minimum features expected of the product. As a consumer, if you want to purchase a mobile phone, what do you expect from it? What are your minimum expectations? You may say good RAM, longer battery backup, HD quality, Gorilla Glass, 128 GB of built-in memory, a high-resolution camera, and so on. As an entrepreneur, if you fail to understand the minimum viability of your product, you will fail fiercely. Before heading to market, you at least need to jot down the minimum expectations of the product.

But who should be involved in analyzing the MVP for your product? The entrepreneur? The marketing team, the sales team, the research and development team?

To answer this, we have to understand our consumer base: they are the right people to answer adequately, and they can suggest the minimum expectations from the product. You can include your marketing and R&D teams for better understanding. This will only be fruitful once we have already identified our niche and, within it, pinpointed a micro niche. The only remaining task is to sort out how to meet these minimum expectations.

Now, as an entrepreneur, you must have consumer understanding and an adequate STP (Segmentation, Targeting and Positioning) strategy for your product. You need to identify a micro niche for your product. You may think of covering the whole market, but is it feasible for you? Everyone wants whole-market coverage and high market share, irrespective of their capabilities in terms of market understanding, consumer analysis, product innovation and the features they can serve.

Now you may say that consumers want the sky for less money. So you revisit the MVP, drop those expectations that were valued less by a large number of consumers, and surface those features on which consensus was reached. Indeed, minimum viable product analysis requires a multi-layer procedure to execute, but new entrepreneurs can start with the two or three most expected features of the product.

At this initial level, I would suggest that with minimum viability of product, an entrepreneur’s success rate is really high.


Rails Paperclip Image Storage – Redux

Paperclip is probably the most popular and feature-rich solution for integrating file uploading and management into an application. It’s an easy file attachment library for Active Record. The main features of Paperclip include basic file uploading, validations and callbacks, post-processing (which includes generating thumbnails) and finally storing the images in AWS S3/ Google Cloud / Microsoft Azure/ Dropbox, etc.
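For readers who haven’t used it, a typical attachment declaration looks like the following (the model, styles, and default URL here are illustrative, not from a specific project):

```ruby
class User < ActiveRecord::Base
  # Attach an avatar and generate two resized variants at upload time.
  has_attached_file :avatar,
                    styles: { medium: "300x300>", thumb: "100x100>" },
                    default_url: "/images/:style/missing.png"

  # Paperclip requires an explicit content-type validation on attachments.
  validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\z/
end
```

`user.avatar.url(:thumb)` then returns the URL of the generated thumbnail.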

Paperclip was started at thoughtbot in August of 2007 as a pet project of Jon Yurek, with the first commit in April of 2008 by Mark Van Holstyn. Since then it took off, defining open-source file attachment handling for an era. Later, in 2017, they decided to deprecate Paperclip, and that marked the kick-start of kt-paperclip.

An important advantage of Paperclip over alternatives such as Active Storage is that it allows for a lot more customisation. Furthermore, it doesn’t use STI (see some of the better data-model design patterns in newer frameworks). Paperclip generates the image variants at upload time, so unlike Active Storage it doesn’t require requests to the Rails server to check whether an image variant exists or needs to be generated.

kt-paperclip is a redux of the original Paperclip, and is constantly updated and maintained. The first commit to kt-paperclip came in November 2019, when we promised to support and maintain Paperclip. Later, we replaced mocha and bourne with RSpec mocks, and updated it so that post-process hooks are not called if validation fails. Now you can use kt-paperclip with Rails 5 and higher, and it supports newer Ruby versions as well.

We are very happy to see the total number of downloads increasing significantly over the last few months. thoughtbot has officially updated the original project’s README with our GitHub link. If you’d like to contribute a feature or bugfix: thanks! Feel free to create pull requests and open issues. Thanks to all the contributors.