Newsletter

Developing a Standard of Testing for Covid-19 using Data Modeling and AI



Dr. Anthony Fauci recently stated that the Coronavirus behind the current COVID-19 pandemic “is very efficient,” meaning it has the ability to protect and advance itself through its environment with explosive propagative efficiency.

This observation reflects the fact that viruses have a billion-year evolutionary head start on humans. A virus may have simply infected a horseshoe bat with little to no effect, thanks to the bat's longer evolutionary adaptation; humans do not have the same evolutionary advantage. The process of natural selection over eons has given the virus the power to proliferate at the rate of millions of human host infections during a single outbreak.

Natural selection is a theoretical concept that applies to all living beings on this earth. It means that a being which selects the best means of survival for its kind, while overcoming the challenges of its environment, improves its chances of remaining viable and of reproducing and regenerating. As part of this process, that being, whether animal, human or microorganism, outcompetes other beings in the endeavor, resulting in the decline or extinction of others in nature that compete with it.

The propagative behavior of the virus resembles certain mathematical models. The authors posit that applying these models in reverse order of the progression of the disease will help mitigate it and prevent it from progressing further, commonly referred to as "flattening the curve." We have succeeded in using these same models to identify the baseline (minimum) level of mitigation required to flatten the curve and ultimately reverse it.

Trend Analysis vs. Data Modeling vs. AI

The prominent methods of statistical analysis utilized to reliably predict future trends, including the spread of diseases such as viruses, are Trend Analysis and Data Modeling.

Trend Analysis uses a minimal amount of the most recent data to ascertain the path the data will take in the near term. Naturally, the more data accumulated, the better and longer-term the trend projections can be. Typically, trend analysis focuses on one subject at a time and does not take all variables into account (due to the small data subset), but it can later be combined with other analyses to achieve better and longer-term projections.

Data Modeling, on the other hand, is the accumulation of vast and varying degrees of data to achieve longer-term and nuanced projections. It is obviously the preferred method to employ as long as sufficient data is available.

Artificial Intelligence builds on data modeling techniques to develop projections beyond what data modeling alone can achieve. It uses multiple algorithmic techniques borrowed from mathematical theory to create accurate, reliable projections of complicated trends such as the spread of viral diseases.
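To make the contrast concrete, the short sketch below illustrates the general idea only (it is not the task force's or iQwest's actual method): it fits a straight line to the most recent week of a synthetic case curve and compares the result with an exponential model fit to the full history. The toy data and the 14-day horizon are assumptions.

```python
# Illustrative only: short-window trend extrapolation vs. a model fit to the
# full history. The synthetic case curve and 14-day horizon are assumptions.
import numpy as np

days = np.arange(60)
cases = 50 * np.exp(0.08 * days)                 # toy epidemic curve

# Trend analysis: straight line through the last 7 days, projected 14 days out
slope, intercept = np.polyfit(days[-7:], cases[-7:], 1)
trend_forecast = slope * (days[-1] + 14) + intercept

# Data modeling: exponential model fit to all 60 days, same 14-day horizon
growth, log_level = np.polyfit(days, np.log(cases), 1)
model_forecast = np.exp(log_level + growth * (days[-1] + 14))

print(f"Short-window trend projection: {trend_forecast:,.0f}")
print(f"Full-history model projection: {model_forecast:,.0f}")
```

The straight-line trend badly understates growth over the longer horizon, which is exactly the gap between trend analysis and data modeling described above.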

 

Pandemics and the Advent of Data Modeling

Defining a standard of testing during a pandemic

The White House Coronavirus task force has employed various data modeling techniques to attempt to project the course of this epidemic. With the help of the University of Washington, it has provided a semblance of the direction of the epidemic, which it has reported in its daily briefings. A key problem with the task force’s approach is that most of its projections utilize trend analysis and not data modeling.

Data modeling requires much more extensive data and detailed mathematical algorithms, but it can arrive at substantially accurate projections even for the long term. Without it, projections are often unreliable and inaccurate. Consider an example of the unreliability of the trend-analysis techniques employed by the Coronavirus task force.

On April 6, 2020, the Coronavirus task force initially projected 100,000 to 240,000 COVID-19 deaths nationally by the end of the pandemic. The week of April 13 it revised its projection down to only 60,000 deaths by August. The revision occurred because the death toll stood at only about 20,000 nationwide at the time. Within one week, however (that is, as of April 20), the toll had doubled to over 40,000 deaths, making 60,000 deaths highly probable by the end of April or sooner. The task force's revised trend-analysis projection was therefore off by at least three months.

To achieve accurate long-term data modeling during a crisis it is therefore critical to obtain a substantial reliable sample set. Essential to that task is standardization of data accumulation and tracking.

Numerous states have complained about the lack of testing and have feverishly sought to increase their own testing, because they recognize that testing is the best barometer available for predicting the need for vital resources such as Personal Protective Equipment (PPE) and ventilators, and for providing accurate predictions to the public. A foundational problem, however, is that no standard system of data collection and analysis exists to make projections both accurate and comprehensive, long-term and nationwide. That standard needs to be both universally applicable and universally applied, so that all participating parties can accurately evaluate their own situation, compare their data with that of their counterparts, and thus judge how both their area and the nation as a whole are doing in addressing the problem.

 

A Standardized Approach to Testing Evaluations

TEC Factor (Testing Efficiency Co-Factor)

Unfortunately, the government has not created a standard for data collection and analysis, even to the minimal extent of specifying how many tests must be recorded to form the minimum foundation for analysis. Therefore, iQwest Information Technologies created a system called the Testing Efficiency Co-Factor, or TEC Factor, to help officials easily communicate their virus emergency goals, including when to safely resume normal economic activity. Those goals would be accurately based on needs and trends that have been identified and reliably validated by a TEC Factor number. We propose using four parameters to calculate this number.

We have purposely designed the TEC Factor to range from 1 to 100, with the goal that each state reach a TEC Factor of 100 in order to begin affecting, and then reducing, the progression of the virus. States that go over 100 will be able to tackle the progression faster; otherwise, the minimum they need to reach is 100.

Based on initial estimates from all states through April 25th, the total number of tests required across the country approaches 20 million if states are to get control of the pandemic. This is without accounting for the infected having to retest multiple times to determine whether they are still infected.

Since 2010, iQwest Information Technologies has been using data modeling that incorporates artificial intelligence. We have successfully applied these techniques to "Big Data" analysis for major clients in the Legal, Life Sciences, Pharmaceutical, Insurance and other fields. Our company has published several articles on this topic that demonstrate the science and reasoning behind this approach. Two such articles referencing these data modeling techniques are available at the following links:

Finding the needle in the digital multilingual haystack

Case Study: How iQwest Helped Samsung Win

The following are the four parameters we used to arrive at the TEC Factor.

  1.  Minimum testing of 4 percent of the population

Our extensive experience shows that there needs to be a minimum 2-percent sample set of data to even start making any reasonably reliable, albeit limited, projections. With at least 4 percent of data collected, however, certainty increases dramatically for longer-term projections. A data collection of 2-percent, therefore, produces short-term trend analysis, while 4-percent data collection yields more accurate and longer-term data modeling.

Therefore, gathering a substantial sample set (testing at least 4 percent of the population) would enable accurate data modeling of the virus. This figure is used as a goal in our calculations to determine baselines for progress; before reaching it, only broad predictions are possible. The CDC does something similar from time to time, using "Representative Random Sampling" to better predict the nature and progression of diseases.

Since this epidemic is not of a constant nature, the 4 percent of data collected cannot be gathered from a single point on the viral timeline, whether earlier or later in the epidemic. Therefore, if some regions lag behind this 4 percent target in testing, the effort to make up for the lost time will grow exponentially in terms of the number of tests. Most of the virus tests performed so far have occurred in the latter part of the crisis (4 million tests as of the week of April 20, representing less than 2 percent of the U.S. population). This is not enough data, either numerically or representatively, to allow government officials to make accurate longer-term projections.
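As a quick check on the figures above, the arithmetic below assumes a U.S. population of roughly 330 million (an assumption for illustration).

```python
# Quick arithmetic behind the coverage figures cited above.
# Assumes a U.S. population of roughly 330 million.
US_POPULATION = 330_000_000

tests_performed = 4_000_000                       # as of the week of April 20
coverage = tests_performed / US_POPULATION        # ~1.2 percent
tests_for_4_percent = 0.04 * US_POPULATION        # ~13.2 million

print(f"Coverage so far: {coverage:.1%}")
print(f"Tests needed for a 4 percent sample: {tests_for_4_percent:,.0f}")
```

The bare 4 percent target works out to roughly 13 million tests; the gap between that figure and the roughly 20 million tests estimated elsewhere in this paper is consistent with the catch-up effect just described.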

However, this testing target is only one parameter necessary to develop the data model and projections that are the ultimate goal. The cycle of chasing the target number will continue endlessly if we are not vigilant in proactively testing and removing the ill and their immediate contacts from the general population. We therefore propose three additional parameters to help formulate a thorough, accurate projection, since testing at this 4 percent rate is only one of the many mitigation factors.

  2.  Progression/Regression Intersects

The second parameter we propose factors in the progression of the disease. Experience in other countries demonstrates that if mitigation efforts are taken, the general course of the disease is approximately 12 weeks. The following chart graphically illustrates the general course of this pandemic in an afflicted area; the curve may be higher or flatter depending on testing and other mitigation factors. Chart 1 illustrates this progression and regression pattern. Since this pandemic is not over, the regression part of the chart has not yet been reached; it is more aspirational and figures into our calculations as the desired outcome. One key point in evaluating this parameter is that most progressions have occurred at a 30-45 degree incline, and this will figure into our calculations of this parameter.

We posit that to best understand and mitigate the spread of the virus, it is necessary to examine the graph of the viral progression overlaid by the graph of its regression, essentially splitting the above graph in half and folding it on itself. Chart 2 below illustrates the disease progression (orange line) versus the disease regression (yellow line). This graph demonstrates an actual reversal, not just a flattening, of the viral trend if testing is performed at a sufficient rate. Our study indicates that to mitigate the effects of the viral spread, testing needs, at a minimum, to mirror the viral trend in reverse, generating a chart that progresses in reverse of the disease progression graph.

The intersection point on the overlaid chart reveals the key point required for the calculation of our second parameter, defined as the mean average of progression/regression or intersect.

The point of intersection (the mean average number) serves as a defined barometer of comparison. We can use this number to compare all regions and evaluate progress from region to region (State, City or Zip Code). This becomes the second parameter in our calculations; its actual value is further refined in the definition of the next parameter. To calculate it properly, we rely on the 30-45 degree rate of incline referenced above. If mitigation efforts are not maintained, especially widespread testing, this number will gradually shift toward the 21-day intersect point, and perhaps even below it for some states. To create an average to use as a barometer, our initial calculations place this number at 28.16 days as of the first week of April 2020, or roughly 4 weeks into a 6-week period. Notice that this is beyond the 21-day halfway point of a 6-week period, which has been presumed to be the midpoint of the evaluative duration. Based on our calculations, we believe the midpoint will realistically occur at 28.16 days rather than the 21 days illustrated in the chart above. This is primarily due to the progression observed so far, and it can change slightly over time. Since our primary goal was to establish the TEC Factor on a range of 1 to 100, we will use this parameter to illustrate why the number should start out at 28.16.
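For readers who want to see the "fold the curve on itself" construction mechanically, here is a minimal sketch using a toy, perfectly symmetric progression curve. It locates the intersect numerically but does not reproduce the 28.16-day figure, which comes from observed state data; the logistic curve and 42-day window are assumptions.

```python
# Toy illustration of overlaying the progression curve with its mirror image
# (the aspirational regression) and locating the intersect. The logistic curve
# and 42-day window are assumptions; this is not the authors' state-level data.
import numpy as np

days = np.arange(0, 43)                                   # a 6-week window
progression = 1.0 / (1.0 + np.exp(-0.25 * (days - 21)))   # toy rising curve
regression = progression[::-1]                            # curve folded on itself

intersect = days[np.argmin(np.abs(progression - regression))]
print(f"Toy intersect at day {intersect}")                # day 21 for a symmetric curve
```

With a perfectly symmetric toy curve the intersect lands exactly on the 21-day midpoint; the 28.16-day value reported above reflects the steeper, asymmetric progression observed in the actual data.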

  3.  6-Week Rolling Period with Different Weighting Factors

 

To ensure the third parameter follows the same time standard as the progression/regression parameter, we use the same 6-week period. Because mitigation efforts matter most early in the progression of the disease, when the progression is slower and climbing at a 30-45 degree incline, we propose a data model whose sample set comprises three 2-week testing periods: the initial 2-week period carries the weight of 3 weeks, owing to the incline and the importance of this period; the second carries the weight of 2 weeks; and the final 2-week period carries the weight of 1 week. This 6-week rolling period allows us to look back at the previous 6 weeks to best identify this crucial parameter for defining the TEC Factor.

To properly calculate this parameter, we also defined a baseline of 300,000 tests over this 6-week period, allocating 100,000 tests to each 2-week period, as shown in the illustration below. We can then compare any region's number of tests with this baseline. Since each State's testing depends on a number of factors, we felt it important to define this baseline to allow comparative evaluations from State to State.

iQwest has created an algorithm that evaluates tests performed across this 6-week period, with declining weight accorded to the later periods. Since natural selection and the disease progression follow the laws of nature and are inherently mathematical, it is logical that the ultimate solution for combating the progression is to use mathematics in the same way nature does.
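iQwest's exact algorithm is not published here, but the sketch below shows, under stated assumptions, how a declining-weight 6-week evaluation against the 300,000-test baseline could look. The 3/2/1 weights and the 100,000-tests-per-period baseline follow the text above; the ratio-style score and the function name are illustrative assumptions.

```python
# A minimal sketch of a declining-weight, 6-week rolling evaluation. The 3/2/1
# weights and the 100,000-tests-per-period baseline follow the text above; the
# ratio-style score is an illustrative assumption, not iQwest's algorithm.
PERIOD_WEIGHTS = (3, 2, 1)            # earliest 2-week period weighted most heavily
BASELINE_PER_PERIOD = 100_000         # 300,000 tests across the 6-week window

def weighted_testing_score(tests_per_period):
    """tests_per_period: tests in each of the three 2-week periods, earliest first."""
    weighted_tests = sum(w * t for w, t in zip(PERIOD_WEIGHTS, tests_per_period))
    weighted_baseline = sum(w * BASELINE_PER_PERIOD for w in PERIOD_WEIGHTS)
    return weighted_tests / weighted_baseline

# Two states with the same total tests: the one that tested early scores higher
print(weighted_testing_score([120_000, 100_000, 90_000]))   # ~1.08, front-loaded
print(weighted_testing_score([60_000, 100_000, 150_000]))   # ~0.88, back-loaded
```

The design choice simply mirrors the text: testing performed early in the window, when the incline is forming, counts for more than the same volume of testing performed late.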

  4.  Congestion Factors

The algorithm also takes into account population density across each State, City or Zip Code. It generates a congestion factor to better identify and track the spread factor of the virus. This variable is crucial, since congested areas such as New York and New Jersey have experienced the highest death rates and incidence of the virus in the country. Mitigation measures such as early intervention, physical distancing and contact tracing also powerfully affect the progression of the disease and viral projections; they are not the topic of this paper, however, which allows us to focus on defining this congestion factor. Furthermore, antibody serology tests can affect the degree to which these measures are necessary. At the end of this paper, we have provided a sample of the TEC Factor as of April 19th and April 25th for comparative purposes.

We have not included the Congestion Factor (CF) in the state-level TEC Factor calculations, mainly due to the vast and varying amounts of empty space in each state. If our methodology is adopted for cities and more densely populated areas, the congestion factor can be included in the final calculation. For now, we include it simply as a guide.
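As a guide to how such a factor could be read, the sketch below computes each region's population density relative to the mean density of the comparison group. The ratio form and the sample figures are assumptions for illustration only.

```python
# Illustrative reading of the Congestion Factor: population density relative to
# the mean density of the comparison group. The ratio form and the sample
# figures are assumptions; the paper treats CF as a guide at the state level.
def congestion_factors(regions):
    """regions: mapping of region name -> (population, land area in sq. miles)."""
    densities = {name: pop / area for name, (pop, area) in regions.items()}
    mean_density = sum(densities.values()) / len(densities)
    return {name: round(d / mean_density, 2) for name, d in densities.items()}

# Hypothetical regions, for shape only
print(congestion_factors({
    "Dense city":      (8_400_000, 300),
    "Suburban county": (1_000_000, 500),
    "Rural state":     (700_000, 77_000),
}))
```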

Conclusion

In summary, we have determined the four parameters for proper data modeling and correctly defining the TEC factor, which will enable accurate disease trend projections for the COVID-19 Coronavirus:

  1. Testing of a minimum of 4 percent of the population, to allow for a sufficient representative sample
  2. A mean average of progression/regression of the disease, defined as 28.16 days
  3. A 6-week rolling data collection period divided into three 2-week periods, each given a different weighting based on the incline of progression and the importance of earlier detection. This parameter, together with the first two, allows an accurate calculation of the TEC Factor at any point in time for the previous 6 weeks
  4. A Congestion Factor (CF) for each region (State, City, Zip code), which increases or decreases testing requirements for that region based on how far its CF is above or below the mean

With these four parameters satisfied/determined, the TEC Factor can then provide every state, county and municipality in the country with up-to-the-minute checks on their testing performance—provided that all states follow the same standard for comparison purposes. This is why a government-mandated standard of data collection and analysis is essential and is already being sought by major government leaders. The TEC Factor can not only provide government and medical authorities with an accurate projection tool, but also help identify the spread factor (risk of spread) from one state to another.

The TEC Factor ranges from 1 to 100 over the course of a 6-week period. We posit that to achieve testing efficiency sufficient for effective predictive accuracy, each region needs to reach a TEC Factor of 100 to gain control of its epidemic.

Furthermore, our analysis indicates that for every two weeks that mitigation efforts are delayed, the TEC Factor needs to be decreased by up to 25 percent, an inverse effect on the existing mitigation efforts. The Congestion Factor will skew this number up or down by percentage points for locales above or below the mean Congestion Factor of all states.

By using the four parameters, together with a region's total population and the number of tests already performed, any region's TEC Factor can be calculated at any point in time.
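The exact formula behind the TEC Factor is iQwest's own and is not published in this paper. Purely to illustrate which inputs are involved and how a 1-100 index could be assembled from them, here is one possible, assumed combination; every weight and scaling choice below is an assumption, not the authors' method.

```python
# Purely illustrative: one way the four parameters could combine into a 1-100
# index. The actual TEC Factor formula is iQwest's and is not published here;
# every weight and scaling choice below is an assumption.
def illustrative_tec_factor(population, tests_per_period,
                            intersect_days=28.16, midpoint_days=21.0,
                            weights=(3, 2, 1), testing_target=0.04):
    # Parameter 1: progress toward testing 4 percent of the population
    coverage_score = min(sum(tests_per_period) / (testing_target * population), 1.0)

    # Parameters 2 and 3: weighted 6-week coverage, scaled down by how far the
    # observed intersect sits past the 21-day midpoint
    weighted = sum(w * t for w, t in zip(weights, tests_per_period))
    weighted_baseline = sum(w * 100_000 for w in weights)
    rolling_score = min(weighted / weighted_baseline, 1.0)
    intersect_adjustment = midpoint_days / intersect_days       # < 1 when lagging

    # Parameter 4 (Congestion Factor) is omitted at the state level, per the text
    return max(1.0, 100.0 * coverage_score * rolling_score * intersect_adjustment)

# Hypothetical state of 8 million people, testing ramping up across six weeks
print(round(illustrative_tec_factor(8_000_000, [60_000, 90_000, 140_000]), 1))
```

Under these assumptions the hypothetical state lands in the mid-50s, well short of the 100 needed to begin reversing the progression.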

For illustrative purposes, the next page provides a sampling of the TEC Factor for the ten states with the highest number of Coronavirus cases as of April 19th and April 25th. Since these are preliminary numbers based on data we have collected, the table is intended only to demonstrate the capability and should not be used to enact any policies.

Each region (State, City or Zip code) can calculate their own TEC Factor based on real numbers.

As noted earlier, based on initial estimates from all states through April 25th, the total number of tests required across the country approaches 20 million if states are to get control of the pandemic, and this is without accounting for the infected having to retest multiple times to determine whether they are still infected.

 

 

About iQwest

For almost two decades iQwest has provided electronic discovery, managed services and consulting to AMLAW100 firms and large organizations throughout the technology, pharmaceutical, manufacturing and other industries. We provide organizations with the proper know-how to establish technology solutions internally. We deal with many confidential and court-protected documents, and can provide non-disclosure agreements (NDAs) and contractual assurances regarding any information.

Mr. Peter Afrasiabi, the President of iQwest, is a proven expert at integrating technology-assisted business processes into organizations. He has almost 30 years of groundbreaking experience in the field, and has been a leader in the litigation support industry for 20 years. He has overseen Managed Services and E-Discovery since the inception of the company, and spearheaded projects involving the processing of over 100 million documents. Previously, he was in charge of the deployment of technology solutions (CRM, email, infrastructure, etc.) across large enterprises. He has a deep knowledge of business processes and project management, and extensive experience working with C-Level executives.

 

Pete Afrasiabi

iQwest Information Technologies, Inc.

844-894-0100 x236 / 714-812-8051 Direct

[email protected]

www.iqwestit.com

https://www.linkedin.com/in/peteafrasiabi

Finding the needle in the digital Multilingual haystack

If you are a Compliance Officer, Litigator or Company Executive who works with large populations of foreign-language documents, or are contemplating a merger with a multi-national company that will involve such documents, this article is for you. This is not a marketing publication. It delves into the details of how your organization can benefit from a technology that until now has been elusive and that often fails when organizations try to execute a strategy of maintaining control of, or reducing, the document populations within their infrastructures. At the end of this article you will also learn how you can use our services to establish such a solution within your organization.

Finding the needle in the digital Multilingual haystack

Whether you manage your organization's eDiscovery needs, are a litigator working with multi-national corporations, or are a Compliance Officer, you commonly work with multilingual document collections. If you are an executive who needs to know everything about your organization, you need a triage strategy that gets you to the right information as soon as possible. If the document count exceeds 50-100k, you typically employ native-speaking reviewers to perform a linear, one-by-one review of documents, use various search mechanisms to assist in the effort, or both. What you may find is that many of the documents being reviewed by these expensive reviewers are irrelevant or require an expert's review. If the population includes documents in three or more languages, the task becomes even more difficult.

There is a better solution, one that, used wisely, can benefit your organization and save time, money and a huge amount of headache. I propose that with these document populations the first thing you need to do is eliminate the non-relevant documents, and if they are in a foreign language you need an accurate translation of each document. In this article you will learn in detail how to improve the quality of these translations using machines, at a cost hundreds of times lower than human translation and naturally much faster.

With the advent of new machine translation (MT) technologies comes the challenge of proving their efficacy in various industries. Historically, MT has been viewed not only as inferior but as something to avoid, and unfortunately the stigma attached to this technology is not far from the truth. Adding to that, the incorrect ways in which various vendors have presented its capabilities have led to its decline in active use across most industries. The general feeling is, "if we can human translate them, why use an inferior method," and that is true for the most part, except that human translation is very expensive, especially when the subject matter is more than a few hundred documents. So is there really a compromise? Is there a point where we can rely on MT to complement existing human translations?

The goal of this article is to look under the hood of these technologies and provide a defensible argument for how MT can be supercharged with human translations. Human beings' innate ability to analyze content provides an opportunity to aid these machine learning technologies: transferring that human analytical insight into a training model can produce dramatically improved translation results.

Machine translation technologies are based on dictionaries, translation memories and rule-based grammar that differ from one software solution to another. Newer technologies that use statistical analysis and mathematical algorithms to construct these rules have been available for the past several years, but individuals with the core competencies to use them are few and far between. On top of that, these software packages are not by themselves the whole solution; they are only part of a larger process that requires an understanding of language translation, of the particular aspects of each language, and of the features of each software solution.

I have personally worked with most if not all of the various technologies used in MT, and about five years ago I developed a methodology that has proven itself in real-life situations. Below is a link to a case study on a regulatory matter that I worked on.

Regulatory Authority Case Study

Followed correctly, this methodology turns machine-translated documents into documents with minimal post-editing requirements, at a cost hundreds of times lower than human translation. The results also read much more like their human-translated counterparts, with proper sentence flow and grammatical accuracy far beyond raw machine-translated output. I refer to this methodology as "Enhanced Machine Translation": still not human translation, but much improved from where we have been until now.

Language Characteristics

To understand the nuances of language translation we first must standardize our understanding of the simplest components within most if not all languages. I have provided a summary of what this may look like below.

  • Definition
    • Standard/Expanded Dictionaries
  • Meaning
    • Dimensions of a word's definition in context
  • Attributes
    • Stereotypical description of characteristics
  • Relations
    • Between concepts, attributes and definitions
  • Linguistics
    • Part of Speech / Grammar Rules
  • Context
    • Common understanding based on existing document examples

It is important simply to accept that this base of understanding is common among most if not all languages, since the model we build on assumes that these building blocks provide a solid foundation for any solution we propose.

Furthermore, familiarity with various classes of technologies available is also important, with a clear understanding of each technology solution’s pros and cons. I have included a basic summary below.

  • Basic (linear) rule-based tools
  • Online tools (Google, Microsoft, etc.)
  • Statistical tools
  • Tools combining the best of both worlds: rule-based and statistical analysis

Linear Dictionaries & Translation Memories

Pros

  • Ability to understand the form of a word (noun, verb, etc.) in a dictionary
  • One to one relationship between words/phrases in translation memories
  • Fully customizable based on language

Cons

  • Inability to formulate correct sentence structure
  • Ambiguous results, often not understandable
  • Usually a waste of resources in most use cases if relied on exclusively

Statistical Machine Translation

Pros

  • Ability to understand the co-occurrence of words and build an algorithm to use as a reference
  • Capable of comparing sentence structures based on examples given and further building on the algorithm
  • Can be designed to be case-centric

Cons

  • Words are not numbers
  • No understanding of the form of words
  • Results can resemble those of concept-searching tools, which often fall off a cliff if relied on too heavily

Now that we understand what is available, the crucial step is to build a model and process that takes advantage of the benefits of these technologies while minimizing their disadvantages. To enhance any of these solutions' capabilities, it is important to understand that machines and machine learning by themselves cannot be the only mechanisms we build our processes on. This is where human translations come into the picture. If there were a way to use human translators' natural ability to analyze content to build out a foundation for our solutions, would we be able to improve the resulting translations? The answer is a resounding yes.

BabelQwest: A Combination of Tools Designed to Enhance the Quality of MT

To understand how we accomplish this, we first need to review some machine-based concept-analysis terminology; in a nutshell, these are the concepts our solutions are built on. I reference the most important of these definitions below, and I have annotated each with how, as linguists and technologists, we use it in building out the "Enhanced Machine Translation" (EMT) solution.

  • Classification: Gather a representative set of documents from the existing corpus that covers the majority of subject matters to be analyzed
  • Clustering: Build out from the documents selected in the classification stage to find similar documents that match the cluster definitions and algorithms of the representative set
  • Summarization: Select key sections of these documents as keywords, phrases and summaries
  • N-Grams: N-Grams are the basic co-occurrences of multiple words within any context. We build these N-Grams from the summarization stage and create a spreadsheet pairing each N-Gram with its raw machine-translated counterpart. The spreadsheet becomes a voting worksheet that allows human translators to review each line, supply the correct translation, and indicate whether a given N-Gram should be part of the final training seed data at all. This seed data fine-tunes the algorithms built out in the next stage down to the context level, with human input. A basic depiction of this spreadsheet is shown below, and a sketch of how such a worksheet could be generated follows this list.

  • Simultaneously, human translate the source documents that generated these N-Grams. The human translation stage builds out a number of document pairs, with the original content in the original language in one document and the human-translated English version in another. These pairs are imported into a statistical and analytical model to build the basic algorithms. By incorporating these human-translated documents into the statistical translation engine's training, the engine discovers word co-occurrences and their relations to the sentences they appear in, as well as variations of terms across different sentences. The results are further fine-tuned with the N-Gram extraction and translation performed by the human translators.
  • Define and/or extract key names and titles of key individuals. This stage is crucial and usually the simplest information to gather, since most if not all parties involved already have references in email addresses, company org charts, etc. that can be gathered easily.
  • Start the training process of the translation engines from the results of the steps above (multilevel, and conditioned on the volume and type of documents)
  • Once a basic training model has been built, we machine translate the original representative documents as a test and compare them with their human-translated counterparts. This stage can be accomplished with fewer than one hundred documents to prove the efficacy of the process, which is why we refer to it as the "Pilot" stage.
  • Repeat the same steps with a larger subset of documents to build a larger training model and to prove that the overall process is fruitful and can be used to machine translate the entire document corpus. We refer to this as the "Proof of Concept" stage, the final validation stage. We then stage the entirety of the documents subject to this process in a "Batch Process" stage.
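To make the N-Gram voting worksheet step more concrete, here is a minimal sketch of how such a worksheet might be generated. The tokenizer, the CSV columns, and the raw_machine_translate() placeholder are illustrative assumptions, not BabelQwest's actual implementation.

```python
# A minimal sketch of generating the N-Gram voting worksheet described above.
# The tokenizer, CSV columns and raw_machine_translate() placeholder are
# assumptions for illustration, not the BabelQwest implementation.
import csv
import re
from collections import Counter

def extract_ngrams(text, n=3, top_k=50):
    """Return the most frequent word n-grams found in the summarized text."""
    words = re.findall(r"\w+", text.lower())
    ngrams = zip(*(words[i:] for i in range(n)))
    return Counter(" ".join(g) for g in ngrams).most_common(top_k)

def raw_machine_translate(phrase):
    # Hypothetical placeholder for a raw MT call (e.g. an online MT engine)
    return f"<raw MT of: {phrase}>"

def write_voting_worksheet(summaries, path="ngram_voting.csv"):
    """Pair each extracted n-gram with its raw MT output so human translators
    can supply the correct translation and vote on keeping it as seed data."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["n_gram", "raw_mt", "corrected_translation", "keep (Y/N)"])
        for ngram, _count in extract_ngrams(" ".join(summaries)):
            writer.writerow([ngram, raw_machine_translate(ngram), "", ""])

write_voting_worksheet(["example summarized passage taken from a classified cluster"])
```

The completed worksheet is what feeds the human-corrected seed data back into the engine training described in the next bullet points.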

In summary, we are building a foundation based on human intellect and analytical ability to perform the final translations. To use the analogy of a large building, the representative documents and their human-translated counterparts (pairs) serve as the concrete foundation and steel beams, the N-Grams serve as the building blocks between the beams, and the key names and titles of individuals serve as the fascia of the building.

Naturally, we are not looking to replace human translation completely; in cases where certified human translations are necessary (regulatory compliance, court-submitted documents, etc.) we still rely heavily on that aspect of the solution. Even so, the overall time and expense of completing a large-scale translation project is reduced by a factor of hundreds. The following chart depicts the ROI of a case on a time scale to help illustrate the impact such a process can have.

This process has additional benefits as well. Imagine for a moment a document production with over 2 million Korean-language documents, produced over a long time span and from various locations around the world. Your organization has a choice: either review every single document and classify it into categories using native Korean-speaking reviewers, or use an Enhanced Machine Translation process so that a larger contingent of English-speaking reviewers can search out and eliminate the non-relevant documents and classify the remainder.

One industry in which this solution offers immense benefits is Electronic Discovery and litigation support, where the majority of attorneys who are subject-matter experts are English speakers. By using these resources along with elaborate search mechanisms in English (Boolean, stemming, concept search, etc.), they can quickly reduce the document population. If, on the other hand, the law firm relied only on native-speaking human reviewers, a crew of 10 expert attorney reviewers, each reviewing 50 documents per hour (4,000 documents per day for the crew on an 8-hour shift), would need 500 working days to complete the review, with each reviewer charging hourly rates that add up very quickly.

From data gathered over the past 15 years performing this type of work for some of the largest law firms in the world, we have constructed a chart showing the impact a proper document reduction or classification strategy can have at every stage of litigation. Please note the bars stack from bottom to top, with MT being the brown shaded area.

The difference is stark, and when proper care is not given to implementation, organizations are often left not knowing the content of the documents within their control or supervision. This becomes a real issue for Compliance Officers, who must rely on knowing every communication that occurs, or has occurred, within their organization at any given time.

About iQwest

For almost two decades we have provided electronic discovery, managed services and consulting to AMLAW100 firms and large organizations in the technology, pharmaceutical, manufacturing and other industries. We can give your organization the know-how to establish this solution internally, and we also provide EMT services on a contractual basis. Since we work with many law firms and handle many confidential and court-protected documents, we can provide contractual assurances regarding the non-disclosure of any information. We are also looking to expand this core competency and are exploring partnerships with organizations that can help us scale our capabilities into new industries.

In summary, we are a smaller organization with a niche capability that can benefit larger organizations.

  • iQwest is looking to scale up our capabilities with the right partners
  • Company vision is to expand our know-how into new industries/verticals
  • Training of internal resources can be accomplished in 6 months with minimal capital investment required
  • We currently have the capability to provide close to 1 million translations per week
  • Expanding processing capabilities to Zurich, Frankfurt and Hong Kong

Mr. Pete Afrasiabi, the President of iQwest, has spent almost three decades integrating technology-assisted business processes into organizations, including 18 years in the litigation support industry. He has been involved with projects involving MT (over 100 million documents processed), Managed Services and eDiscovery since the inception of the company, and prior to that with the deployment of technology solutions (CRM, email, infrastructure, etc.) across large enterprises. He has a deep knowledge of business processes and project management, and extensive experience working with C-Level executives.

Pete Afrasiabi
iQwest Information Technologies, Inc.
844-894-0100 x236
[email protected]
www.iqwestit.com