Introducing corpora.ai... Encore

corpora.ai Prompting Language

Functional Research at your fingertips

At corpora.ai we continue to evolve every aspect of our product and approach in an effort to provide the most accurate, usable and insightful research. This week we are releasing a set of features that we believe will become pivotal tools for users conducting targeted or general research. At the time of writing, all of these features are available and usable on corpora.ai.

Since our initial public beta release, we have been monitoring user queries and discussing experiences with users in specific research disciplines. We have been surprised and exhilarated by the volume of positive feedback, and we are grateful to everyone who has used corpora.ai so far - you are helping us improve research for everyone. That feedback also highlighted a desire from users for more control over certain aspects of the process that discovers, ingests, and produces the research books.

One area where users wanted more control was the content sources. Upon learning this, we agreed! We have therefore exposed a pseudo query language called corpora.ai Prompting Language (CPL), which allows users to dictate which source group(s) they want to research. This, along with other aspects of CPL, is discussed in greater detail in the article corpora.ai: Tips and Tricks Pt. 1 of.... Below are quick examples; please refer to the linked article for a full explanation, more in-depth examples of this part of CPL, and a list of supported source groups.

A further area of improvement is control over source content language. A user can now define a research hypothesis and choose which source languages their research book(s) are built on. This does not affect the output language, but that too can now be controlled through CPL. For example, a user can pose an international political hypothesis and produce books that show each party's view of it, while keeping the output in the user's native tongue. This is also described in greater detail in the aforementioned corpora.ai: Tips and Tricks Pt. 1 of..., where links to examples are included.
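To make the idea concrete, here is a minimal, purely illustrative sketch of how a research request with source-group and language controls could be represented. The names and fields below are hypothetical and do not reflect CPL's actual syntax or corpora.ai's internals - see the linked article for the real examples.

    from dataclasses import dataclass, field

    # Hypothetical illustration only - not actual CPL syntax or corpora.ai internals.
    @dataclass
    class ResearchRequest:
        hypothesis: str                                        # the research hypothesis to investigate
        source_groups: list = field(default_factory=list)     # e.g. ["news", "government publications"]
        source_languages: list = field(default_factory=list)  # languages the sources may be written in
        output_language: str = "en"                            # language the research book is produced in

    # A user researching an international hypothesis from French and German
    # sources, while reading the resulting book in English:
    request = ResearchRequest(
        hypothesis="How have EU energy policies shifted since 2020?",
        source_groups=["news", "government publications"],
        source_languages=["fr", "de"],
        output_language="en",
    )
    print(request)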

We soon discovered users attempting to compare conceptual topics, as well as concrete individuals or conditions. We had already begun working on this key piece before that realization, and quickly prioritized the work.

We have now implemented an update to the Natural Language Understanding (NLU) engine that comprehends a comparison request, structures the resulting sequence of queries to our corpus to provide the best metrics of comparison, and builds a research book on each competitor. This function cannot currently be used alongside source filtering. The functionality is covered in much greater detail, with examples, in the article linked above, but below are a few example screenshots of the CPL structure that produces the comparison query output.

Comparison queries also produce a different overview structure, which shows a quick summary of the topic and the metrics being compared across the entities identified as competitors. These metrics are preserved across all of the books that are built, so readers can compare each entity as they read from book to book.
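As a rough mental model only - this is not corpora.ai's actual NLU engine or CPL output - the sketch below shows the general shape of the idea: one comparison request fans out into one query per competitor, all sharing the same metric set so the resulting books stay directly comparable.

    # Hypothetical sketch only - not corpora.ai's actual engine. The topic,
    # competitors and metrics here are made-up placeholders.

    def build_comparison_queries(topic, competitors, metrics):
        """Return one corpus query per competitor, each covering every shared metric."""
        queries = []
        for entity in competitors:
            for metric in metrics:
                queries.append(f"{topic}: {entity} - {metric}")
        return queries

    shared_metrics = ["market share", "pricing", "customer satisfaction"]
    for q in build_comparison_queries(
        "electric vehicle manufacturers",
        ["Company A", "Company B"],
        shared_metrics,
    ):
        print(q)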

Dates are now recognized in CPL, which allows users to specify exactly what timeframe their research focuses on. This can be expressed as from and to, between, or even after and before, among many other permutations of natural language.
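For illustration only - this is not CPL's actual date handling - the small sketch below shows how a few of those natural-language phrasings can map onto the same start/end range, which is the behaviour described above.

    import re

    # Hypothetical sketch only. It recognizes a handful of timeframe phrasings
    # and returns a (start_year, end_year) tuple; None marks an open end.
    def parse_timeframe(text):
        text = text.lower()
        m = re.search(r"(?:between|from)\s+(\d{4})\s+(?:and|to)\s+(\d{4})", text)
        if m:
            return int(m.group(1)), int(m.group(2))
        m = re.search(r"(?:after|since)\s+(\d{4})", text)
        if m:
            return int(m.group(1)), None
        m = re.search(r"before\s+(\d{4})", text)
        if m:
            return None, int(m.group(1))
        return None, None

    print(parse_timeframe("research renewable energy policy between 2015 and 2020"))
    print(parse_timeframe("coverage after 2018"))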

This release is described as v2.0 due to the large amount of work completed beyond CPL. These updates and fixes mainly affect the complex and proprietary services that underpin corpora.ai. They include, but are not limited to, the following:

  • Multiple UI Bug Fixes
  • Introduction of CPL
  • Significant performance and optimization gains in the Core Processing Engine
  • Engine Bug Fixes
  • Storage Bug Fixes
  • New functionality added to generated research books

We believe the core features of CPL will push corpora.ai into a different realm. Users can now parameterize, control and compare deep research topics at great speed, in an extensible and approachable form. CPL will continue to improve and evolve, driven by our passion and by user feedback.

CPL is a feature of corpora.ai; the product, and the bespoke engine and web of services beneath it, will grow and expand as we continue to grow as a team and as a business. We welcome your feedback via email at support@corpora.ai, on X @corpora_ai, on LinkedIn or any of our other social platforms, as well as in the comments below.