The following are the changes we are either working on now or are
going to work on in the near future.
Secondary indexes => higher performance.
Docstore for columnar attributes => higher performance.
Read-only listeners => better security.
Bulk insert/replace via HTTP JSON => higher performance.
Keepalive support in HTTP for multi-queries => ease of use.
Further columnar storage performance optimizations.
Making full-text optional. Manticore is not only about full-text,
but it still requires at least one full-text field in each index. It’s time
to change that.
New https://repo.manticoresearch.com/ backend => ease of use,
safer place for packages.
New /cli endpoint for running SQL queries over HTTP => even easier
Really bulk INSERT/REPLACE/DELETE via JSON over HTTP
Minor changes
TODO
Breaking changes
Changed behaviour of REST /sql
endpoint: /sql?mode=raw now requires escaping
Index meta file format change. The new version will
convert older indexes automatically, but:
you may get a warning like WARNING: ... syntax error, unexpected
TOK_IDENT
you won’t be able to run the index with previous Manticore versions,
so make sure you have a backup
Format change of the response to bulk
INSERT/REPLACE/DELETE requests:
previously each sub-query constituted a separate transaction and
resulted in a separate response
now the whole batch is considered a single transaction, which
returns a single response
Bugfixes
TODO
Version 4.2.0, Dec 23 2021
Major new features
Pseudo-sharding support for real-time indexes and full-text
queries. In the previous release we added limited pseudo-sharding
support. Starting from this version you can get all the benefits of
pseudo-sharding and your multi-core processor just by enabling searchd.pseudo_sharding.
The coolest thing is that you don’t need to do anything with your
indexes or queries for that: just enable it, and if you have free CPU it
will be used to lower your response time. It supports plain and
real-time indexes for full-text, filtering and analytical queries. For
example, here is how enabling pseudo-sharding made most queries’
response time on average about 10x lower on the Hacker News curated comments
dataset multiplied 100 times (116 million docs in a plain
index).
PQ transactions are now atomic and isolated. Previously PQ
transaction support was limited. This enables much faster REPLACE
into PQ, especially when you need to replace a lot of rules at
once. Performance details:
4.0.2
It takes 48 seconds to insert 1M PQ rules and
406 seconds to REPLACE just 40K in 10K batches.
```
root@perf3 ~ # mysql -P9306 -h0 -e "drop table if exists pq; create table pq (f text, f2 text, j json, s string) type='percolate';"; date; for m in `seq 1 1000`; do (echo -n "insert into pq (id,query,filters,tags) values "; for n in `seq 1 1000`; do echo -n "(0,'@f (cat | ( angry dog ) | (cute mouse)) @f2 def', 'j.json.language=\"en\"', '{\"tag1\":\"tag1\",\"tag2\":\"tag2\"}')"; [ $n != 1000 ] && echo -n ","; done; echo ";")|mysql -P9306 -h0; done; date; mysql -P9306 -h0 -e "select count(*) from pq"
Wed Dec 22 10:24:30 AM CET 2021
Wed Dec 22 10:25:18 AM CET 2021
+----------+
| count(*) |
+----------+
| 1000000  |
+----------+
root@perf3 ~ # date; (echo "begin;"; for offset in `seq 0 10000 30000`; do n=0; echo "replace into pq (id,query,filters,tags) values "; for id in `mysql -P9306 -h0 -NB -e "select id from pq limit $offset, 10000 option max_matches=1000000"`; do echo "($id,'@f (tiger | ( angry bear ) | (cute panda)) @f2 def', 'j.json.language=\"de\"', '{\"tag1\":\"tag1\",\"tag2\":\"tag2\"}')"; n=$((n+1)); [ $n != 10000 ] && echo -n ","; done; echo ";"; done; echo "commit;") > /tmp/replace.sql; date
Wed Dec 22 10:26:23 AM CET 2021
Wed Dec 22 10:26:27 AM CET 2021
root@perf3 ~ # time mysql -P9306 -h0 < /tmp/replace.sql

real    6m46.195s
user    0m0.035s
sys     0m0.008s
```
4.2.0
It takes 34 seconds to insert 1M PQ rules and
23 seconds to REPLACE them in 10K batches.
```
root@perf3 ~ # mysql -P9306 -h0 -e "drop table if exists pq; create table pq (f text, f2 text, j json, s string) type='percolate';"; date; for m in `seq 1 1000`; do (echo -n "insert into pq (id,query,filters,tags) values "; for n in `seq 1 1000`; do echo -n "(0,'@f (cat | ( angry dog ) | (cute mouse)) @f2 def', 'j.json.language=\"en\"', '{\"tag1\":\"tag1\",\"tag2\":\"tag2\"}')"; [ $n != 1000 ] && echo -n ","; done; echo ";")|mysql -P9306 -h0; done; date; mysql -P9306 -h0 -e "select count(*) from pq"
Wed Dec 22 10:06:38 AM CET 2021
Wed Dec 22 10:07:12 AM CET 2021
+----------+
| count(*) |
+----------+
| 1000000  |
+----------+
root@perf3 ~ # date; (echo "begin;"; for offset in `seq 0 10000 990000`; do n=0; echo "replace into pq (id,query,filters,tags) values "; for id in `mysql -P9306 -h0 -NB -e "select id from pq limit $offset, 10000 option max_matches=1000000"`; do echo "($id,'@f (tiger | ( angry bear ) | (cute panda)) @f2 def', 'j.json.language=\"de\"', '{\"tag1\":\"tag1\",\"tag2\":\"tag2\"}')"; n=$((n+1)); [ $n != 10000 ] && echo -n ","; done; echo ";"; done; echo "commit;") > /tmp/replace.sql; date
Wed Dec 22 10:12:31 AM CET 2021
Wed Dec 22 10:14:00 AM CET 2021
root@perf3 ~ # time mysql -P9306 -h0 < /tmp/replace.sql

real    0m23.248s
user    0m0.891s
sys     0m0.047s
```
Minor changes
optimize_cutoff
is now available as a configuration option in the searchd
section. It’s useful when you want to globally limit the number of RT chunks
in all your indexes to a particular value.
PR
#598 bigint support for YEAR() and other timestamp
functions.
Commit
8e85d4bc Adaptive rt_mem_limit.
Previously Manticore Search collected exactly up to
rt_mem_limit of data before saving a new disk chunk to
disk, and while saving it still collected up to 10% more (the so-called
double-buffer) to minimize possible insert suspension. If that limit was
also exhausted, adding new documents was blocked until the disk chunk
was fully saved to disk. The new adaptive limit builds on the fact
that we now have auto-optimize,
so it’s not a big deal if disk chunks do not fully respect
rt_mem_limit and flushing of a disk chunk starts earlier. So
now we collect up to 50% of rt_mem_limit and save that as a
disk chunk. Upon saving, we look at the statistics (how much we’ve saved,
how many new documents arrived while saving) and recalculate the
initial rate, which will be used next time. For example, if we saved 90
million documents, and another 10 million docs arrived while saving, the
rate is 90%, so we know that next time we can collect up to 90% of
rt_mem_limit before starting to flush another disk chunk.
The rate value is calculated automatically from 33.3% to 95%.
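The recalculation described above can be sketched in a few lines. This is an illustrative model only (not Manticore's actual code), assuming the rate is simply the share of documents that made it into the saved chunk, clamped to the documented 33.3%–95% bounds:

```python
def next_rt_mem_limit_rate(saved_docs: int, arrived_while_saving: int) -> float:
    """Recalculate the share of rt_mem_limit to collect before the next flush.

    saved_docs: documents written into the just-saved disk chunk
    arrived_while_saving: new documents that came in during the save
    """
    rate = saved_docs / (saved_docs + arrived_while_saving)
    # the changelog states the rate is kept between 33.3% and 95%
    return min(max(rate, 0.333), 0.95)

# 90M docs saved, 10M arrived while saving -> collect up to 90% next time
print(next_rt_mem_limit_rate(90_000_000, 10_000_000))  # 0.9
```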
The binlog version was increased; binlogs from the previous version won’t be
replayed, so make sure you stop Manticore Search cleanly during the upgrade:
no binlog files should be in /var/lib/manticore/binlog/
except binlog.meta after stopping the previous
instance.
Commit
3f659f36 new column “chain” in SHOW THREADS OPTION
format=all. It shows a stack of some task info tickets, which is most useful
for profiling, so if you are parsing SHOW THREADS
output, be aware of the new column.
searchd.workers has been obsolete since 3.5.0 and is now
deprecated; if you still have it in your configuration file it will
trigger a warning on start. Manticore Search will start, but with a
warning.
If you use PHP and PDO to access Manticore you need to set
PDO::ATTR_EMULATE_PREPARES to true.
Bugfixes
❗Issue
#650 Manticore 4.0.2 slower than Manticore 3.6.3. 4.0.2 was faster
than previous versions in terms of bulk inserts, but significantly
slower for single document inserts. It’s been fixed in 4.2.0.
❗Commit
22f4141b RT index could get corrupted under intensive REPLACE load,
or it could crash
Commit
03be91e4 fixed average at merging groupers and group N sorter; fixed
merge of aggregates
Issue
#679 Batch queries causing crashes again with v4.0.3
Commit
f7f8bd8c fixed daemon crash on startup trying to re-join cluster
with invalid nodes list
Issue
#643 Manticore 4.0.2 does not accept connections after batch of
inserts
Issue
#635 FACET query with ORDER BY JSON.field or string attribute could
crash
Issue
#634 Crash SIGSEGV on query with packedfactors
Commit
41657f15 morphology_skip_fields was not supported by create
table
Version 4.0.2, Sep 21 2021
Major new features
Full support of Manticore Columnar
Library. Previously Manticore Columnar Library was
supported only for plain indexes. Now it’s supported:
in real-time indexes for INSERT, REPLACE,
DELETE, OPTIMIZE
in replication
in ALTER
in indextool --check
Automatic index compaction (Issue
#478). Finally, you don’t have to call OPTIMIZE manually or via a
cron task or any other kind of automation. Manticore now does it for you
automatically and by default. You can set the default compaction threshold
via the optimize_cutoff
global variable.
Chunk snapshots and locks system revamp. These
changes may be invisible from the outside at first glance, but they
significantly improve the behaviour of many things happening in real-time
indexes. In a nutshell: previously most Manticore data
manipulation operations relied heavily on locks; now we use disk chunk
snapshots instead.
Significantly faster bulk INSERT performance into a
real-time index. For example, on Hetzner’s
server AX101 with SSD, 128 GB of RAM and AMD’s Ryzen™ 9 5950X (16×2
cores), with 3.6.0 you could get 236K docs per second
inserted into a table with schema name text, email string,
description text, age int, active bit(1) (default
rt_mem_limit, batch size 25000, 16 concurrent insert
workers, 16 million docs inserted overall). In 4.0.2 the same
concurrency/batch/count gives 357K docs per second.
read operations (e.g. SELECTs, replication) are performed with
snapshots
operations that just change internal index structure without
modifying schema/documents (e.g. merging RAM segments, saving disk
chunks, merging disk chunks) are performed with read-only snapshots and
replace the existing chunks in the end
UPDATEs and DELETEs are performed against the existing chunks, but if
a merge happens to be running, the writes are collected and
are then applied against the new chunks
UPDATEs acquire an exclusive lock sequentially for every chunk.
Merges acquire a shared lock when entering the stage of collecting
attributes from the chunk, so at any given moment only one operation
(merge or update) has access to the attributes of the chunk.
When a merge reaches the phase where it needs the attributes, it sets a
special flag. When an UPDATE finishes, it checks the flag, and if it’s set,
the whole update is stored in a special collection. Finally, when the
merge finishes, it applies the collected updates to the newborn disk
chunk.
ALTER runs via an exclusive lock
replication runs as a usual read operation, but in addition saves
the attributes before SST and forbids updates during the SST
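The flag-and-deferred-updates protocol described above can be sketched as follows. This is an illustrative toy model, not Manticore's actual implementation: while a merge is in its attribute-collecting phase, finishing UPDATEs park themselves in a special collection, and the merge replays them onto the newborn chunk at the end.

```python
class Chunk:
    def __init__(self, docs):
        self.docs = docs  # doc_id -> attribute dict

    def update(self, upd):
        doc_id, attrs = upd
        self.docs[doc_id].update(attrs)

    def copy(self):
        return Chunk({doc_id: dict(attrs) for doc_id, attrs in self.docs.items()})


class ChunkMerger:
    def __init__(self, chunk):
        self.chunk = chunk
        self.collecting_attrs = False  # the "special flag"
        self.deferred_updates = []     # updates arriving while merging

    def apply_update(self, update):
        if self.collecting_attrs:
            self.deferred_updates.append(update)  # park it for later
        else:
            self.chunk.update(update)             # normal path

    def merge(self):
        self.collecting_attrs = True
        new_chunk = self.chunk.copy()  # stands in for the long merge phase
        self.collecting_attrs = False
        for update in self.deferred_updates:  # replay parked updates
            new_chunk.update(update)
        self.deferred_updates.clear()
        return new_chunk
```

In this sketch the merge phase is instantaneous, so to simulate an UPDATE finishing mid-merge you would set `collecting_attrs` by hand before calling `apply_update`.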
ALTER can
add/remove a full-text field (in RT mode). Previously it could
only add/remove an attribute.
🔬 Experimental: pseudo-sharding for full-scan
queries - allows parallelizing any non-full-text search query.
Instead of preparing shards manually you can now just enable the new option
searchd.pseudo_sharding
and expect response time lower by a factor of up to the number of CPU cores for
non-full-text search queries. Note that it can easily occupy all existing CPU
cores, so if you care not only about latency but also about throughput, use
it with caution.
Minor changes
Linux Mint and Ubuntu Hirsute Hippo are supported via APT
repository
faster update by id via HTTP in big indexes in some cases (depends
on the distribution of ids):
3.6.0:

```
time curl -X POST -d '{"update":{"index":"idx","id":4611686018427387905,"doc":{"mode":0}}}' -H "Content-Type: application/x-ndjson" http://127.0.0.1:6358/json/bulk

real    0m43.783s
user    0m0.008s
sys     0m0.007s
```
4.0.2:

```
time curl -X POST -d '{"update":{"index":"idx","id":4611686018427387905,"doc":{"mode":0}}}' -H "Content-Type: application/x-ndjson" http://127.0.0.1:6358/json/bulk

real    0m0.006s
user    0m0.004s
sys     0m0.001s
```
custom
startup flags for systemd. Now you don’t need to start searchd
manually in case you need to run Manticore with some specific startup
flag
new function LEVENSHTEIN()
which calculates Levenshtein distance
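As a reminder of what Levenshtein distance is, here is a plain Python sketch of the classic dynamic-programming computation; this only illustrates the metric the new LEVENSHTEIN() function computes, not its SQL signature or options:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    # prev[j] holds the edit distance between the current prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,              # deletion
                cur[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb), # substitution (free if chars match)
            ))
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```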
added new searchd
startup flags --replay-flags=ignore-trx-errors and
--replay-flags=ignore-all-errors so one can still start
searchd if the binlog is corrupted
the new version can read older indexes, but the older versions can’t
read Manticore 4’s indexes
removed implicit sorting by id. Sort explicitly if required
charset_table’s default value changes from 0..9,
A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F,
U+401->U+451, U+451 to non_cjk
OPTIMIZE happens automatically. If you don’t need it
make sure to set auto_optimize=0 in section
searchd in the configuration file
Issue
#616 ondisk_attrs_default was deprecated and is now
removed
for contributors: we now use the Clang compiler for Linux builds, as
according to our tests it can build a faster Manticore Search and
Manticore Columnar Library
if max_matches is
not specified in a search query, it gets implicitly updated with the
lowest needed value for the sake of the performance of the new columnar
storage. This can affect the total metric in SHOW META,
but not total_found, which is the actual number of found
documents.
Migration from Manticore 3
make sure you stop Manticore 3 cleanly:
no binlog files should be in /var/lib/manticore/binlog/
(only binlog.meta should be in the directory);
otherwise the indexes for which Manticore 4 can’t replay binlogs won’t be
run
the new version can read older indexes, but the older versions can’t
read Manticore 4’s indexes, so make sure you make a backup if you want
to be able to roll back from the new version easily
if you run a replication cluster make sure you:
stop all your nodes first cleanly
and then start the node which was stopped last with
--new-cluster (run tool manticore_new_cluster
in Linux).
Commit
696f8649 - fixed crash during SST on joiner with an active index; added
sha1 verification at the joiner node when writing file chunks to speed up index
loading; added rotation of changed index files at the joiner node on index
load; added removal of index files at the joiner node when an active index gets
replaced by a new index from the donor node; added replication log points at the
donor node for sending files and chunks
Commit
b296c55a - crash on JOIN CLUSTER in case the address is
incorrect
Commit
418bf880 - during initial replication of a large index the joining
node could fail with ERROR 1064 (42000): invalid GTID,
(null), and the donor could become unresponsive while another node
was joining
Commit
6fd350d2 - hash could be calculated wrong for a big index which
could result in replication failure
Issue
#615 - replication failed on cluster restart
Issue
#618 - searchd --stopwait fails under root. It also fixes systemctl
behaviour (previously it showed a failure for ExecStop and didn’t
wait long enough for searchd to stop properly)
Issue
#619 - INSERT/REPLACE/DELETE vs SHOW STATUS.
command_insert, command_replace and others
were showing wrong metrics
Issue
#620 - charset_table for a plain index had a wrong
default value
Issue
#607 - Manticore cluster node crashes when unable to resolve a node
by name
Issue
#623 - replication of updated index can lead to undefined state
Commit
ca03d228 - indexer could hang on indexing a plain index source with
a json attribute
Commit
53c75305 - fixed not equal expression filter at PQ index
Commit
ccf94e02 - fixed SELECT windows in list queries above 1000 matches.
SELECT * FROM pq ORDER BY id desc LIMIT 1000, 100 OPTION
max_matches=1100 was not working previously
Commit
a0483fe9 - HTTPS request to Manticore could cause warning like “max
packet size(8388608) exceeded”
Issue
#648 - Manticore 3 could hang after a few updates of string
attributes
Version 3.6.0
Major new features
Fully revised histograms. When building an index, Manticore also
builds histograms for each field in it, which it then uses for faster
filtering. In 3.6.0 the algorithm was fully revised, and you can get
higher performance if you have a lot of data and do a lot of
filtering.
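The histogram idea can be sketched roughly like this. It is a toy model only, not Manticore's actual data structure or algorithm: a per-attribute histogram gives an upper-bound estimate of how many rows can match a range filter, and a zero estimate lets a whole block be skipped without scanning it.

```python
class Histogram:
    """Toy equal-width histogram over an integer attribute of one index block."""

    def __init__(self, values, buckets=8):
        self.lo, self.hi = min(values), max(values)
        self.width = max(1, (self.hi - self.lo + 1) // buckets)
        self.counts = [0] * buckets
        for v in values:
            self.counts[min((v - self.lo) // self.width, buckets - 1)] += 1

    def estimate(self, lo, hi):
        """Upper-bound estimate of rows matching lo <= value <= hi."""
        if hi < self.lo or lo > self.hi:
            return 0  # the filter cannot match anything in this block: skip it
        first = min(len(self.counts) - 1, (max(lo, self.lo) - self.lo) // self.width)
        last = min(len(self.counts) - 1, (min(hi, self.hi) - self.lo) // self.width)
        return sum(self.counts[first:last + 1])
```

A real implementation keeps such histograms per attribute per chunk and consults them before scanning; the point here is only the skip-if-zero logic.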
Minor changes
tool manticore_new_cluster [--force] useful for
restarting a replication cluster via systemd
faster
JSON parsing: our tests show 3-4% lower latency on queries like
WHERE json.a = 1
non-documented command DEBUG SPLIT as a prerequisite
for automatic sharding/rebalancing
Bugfixes
Issue
#584 - inaccurate and unstable FACET results
Issue
#506 - Strange behavior when using MATCH: those who suffer from this
issue need to rebuild the index, as the problem was in the phase of
building the index
Issue
#387 - intermittent core dump when running query with SNIPPET()
function
Stack optimizations useful for processing complex queries
Commit
4795dc49 - percolate index filter and tags were empty for empty
stored query (test 369)
Commit
c3f0bf4d - fixed breaks of the replication SST flow on networks with long
latency and a high error rate (replication between different data centers);
updated replication command version to 1.03
Commit
ba2d6619 - fixed joiner locking the cluster on write operations after joining
the cluster (test 385)
Commit
de4dcb9f - wildcards matching with exact modifier (test 321)
Commit
812dab74 - wrong weight for phrase starting with wildcard
Commit
1771afc6 - a percolate query with wildcards generated terms without
payload on matching, causing interleaved hits and breaking matching (test
417)
Commit
aa0d8c2b - fixed calculation of ‘total’ in case of parallelized
query
Commit
18d81b3c - crash in Windows with multiple concurrent sessions at
daemon
Commit
84432f23 - some index settings could not be replicated
Commit
93411fe6 - on a high rate of adding new events the netloop could sometimes
freeze, because an atomic ‘kick’ event was processed once for several
events at a time, losing the actual actions from them
Commit
d805fc12 - New flushed disk chunk might be lost on commit
Commit
ff716353 - TRUNCATE WITH RECONFIGURE worked wrong with stored
fields
Breaking changes:
New binlog format: you need to make a clean stop of Manticore before
upgrading
Index format slightly changed: the new version can read your existing
indexes fine, but if you decide to downgrade from 3.6.0 to an older
version the newer indexes will be unreadable
Replication format change: don’t replicate from an older version to
3.6.0 and vice versa, switch to the new version on all your nodes at
once
reverse_scan is deprecated. Make sure you don’t use
this option in your queries starting from 3.6.0, since they will fail
otherwise
As of this release we don’t provide builds for RHEL 6, Debian Jessie
and Ubuntu Trusty any more. If having them supported is mission-critical
for you, contact
us
Deprecations
No more implicit sorting by id. If you rely on it make sure to
update your queries accordingly
Search option reverse_scan has been deprecated
Version 3.5.4, Dec 10 2020
New Features
New Python, Javascript and Java clients are generally available now
and are well documented in this manual.
automatic drop of a disk chunk of a real-time index. This
optimization makes it possible to drop a disk chunk automatically when
OPTIMIZE-ing a real-time index, if the chunk is obviously not needed any more
(all its documents are suppressed). Previously it still required merging; now
the chunk can simply be dropped instantly. The cutoff
option is ignored, i.e. even if nothing is actually merged, an obsolete
disk chunk gets removed. This is useful in case you maintain retention
in your index and delete older documents. Now compacting such indexes
will be faster.
Issue
#453 New option indexer.ignore_non_plain=1
is useful in case you run indexer --all and have not only
plain indexes in the configuration file. Without
ignore_non_plain=1 you’ll get a warning and a respective
exit code.
Commit
ea6850e4 count distinct returned 0 at low max_matches on a local
index
Commit
362f27db when using aggregation, stored texts were not returned in
hits
Version 3.5.2, Oct 1 2020
New features
OPTIMIZE reduces disk chunks to a number of chunks (default is
2 × the number of cores) instead of a single one. The optimal
number of chunks can be controlled by the cutoff
option.
The NOT operator can now be used standalone. It is disabled by default,
since accidental single-NOT queries can be slow. It can be enabled by
setting the new searchd directive not_terms_only_allowed
to 0.
New setting max_threads_per_query
sets how many threads a query can use. If the directive is not set, a
query can use threads up to the value of threads. For an individual
SELECT query the number of threads can be limited with OPTION threads=N, overriding
the global max_threads_per_query.
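For example, a minimal searchd config fragment combining the directives above might look like this (the numbers are made-up examples, not recommendations):

```
searchd
{
    threads = 16               # total worker threads for the daemon
    max_threads_per_query = 4  # a single query may use at most 4 of them
}
```

An individual SELECT can still narrow its own cap further with OPTION threads=N.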
Percolate indexes can now be imported with IMPORT
TABLE.
HTTP API /search receives basic support for faceting/grouping by new query node
aggs.
Minor changes
If no replication listen directive is declared, the engine will try
to use ports after the defined ‘sphinx’ port, up to 200.
listen=...:sphinx needs to be explicitly set for SphinxSE
connections or SphinxAPI clients.
SHOW
INDEX STATUS outputs new metrics: killed_documents,
killed_rate, disk_mapped_doclists,
disk_mapped_cached_doclists,
disk_mapped_hitlists and
disk_mapped_cached_hitlists.
SQL command status now outputs
Queue\Threads and Tasks\Threads.
Deprecations:
dist_threads is now completely deprecated; searchd will
log a warning if the directive is still used.
Docker
The official Docker image is now based on Ubuntu 20.04 LTS
Packaging
Besides the usual manticore package, you can also
install Manticore Search by components:
manticore-server - provides searchd,
config and service files
Version 3.5.0
This release took so long because we were working hard on
changing the multitasking mode from threads to coroutines.
It makes configuration simpler and query parallelization much more
straightforward: Manticore just uses the given number of threads (see the new
setting threads), and
the new mode makes sure it’s done in the most optimal way.
any highlighting that works with several fields
(highlight({},'field1, field2') or highlight
in JSON queries) now applies limits per field by default.
any highlighting that works with plain text (highlight({},
string_attr) or snippet()) now applies limits to the
whole document.
per-field
limits can be switched to global limits with the
limits_per_field=0 option (1 by default).
allow_empty is
now 0 by default for highlighting via HTTP JSON.
The same port can
now be used for HTTP, HTTPS and the binary API (to accept connections
from a remote Manticore instance). listen = *:mysql is
still required for connections via the MySQL protocol. Manticore now
automatically detects the type of client trying to connect to it, except for
MySQL (due to restrictions of the protocol).
In plain
mode it’s called sql_field_string. Now it’s available
in RT
mode for real-time indexes too. You can use it as shown in the
example:
```
mysql> create table t(f string attribute indexed);
mysql> insert into t values(0,'abc','abc');
mysql> select * from t where match('abc');
+---------------------+------+
| id                  | f    |
+---------------------+------+
| 2810845392541843463 | abc  |
+---------------------+------+
1 row in set (0.01 sec)

mysql> select * from t where f='abc';
+---------------------+------+
| id                  | f    |
+---------------------+------+
| 2810845392541843463 | abc  |
+---------------------+------+
1 row in set (0.00 sec)
```
thread_stack
now limits the maximum thread stack size, not the initial one.
Improved SHOW THREADS output.
Display progress of long CALL PQ in SHOW
THREADS.
cpustat, iostat, coredump can be changed during runtime with SET.
SET [GLOBAL] wait_timeout=NUM implemented.
Breaking changes:
Index format has been changed. Indexes built in
3.5.0 cannot be loaded by Manticore version < 3.5.0, but Manticore
3.5.0 understands older formats.
INSERT
INTO PQ VALUES() (i.e. without providing the column list)
previously expected exactly (query, tags) as the values.
It’s been changed to (id, query, tags, filters). The id can be
set to 0 if you want it to be auto-generated.
allow_empty=0
is a new default in highlighting via HTTP JSON interface.
Only absolute paths are allowed for external files (stopwords,
exceptions etc.) in CREATE TABLE/ALTER
TABLE.
Deprecations:
ram_chunks_count was renamed to
ram_chunk_segments_count in SHOW INDEX
STATUS.
workers is obsolete. There’s only one workers mode
now.
dist_threads is obsolete. All queries are as much
parallel as possible now (limited by threads and
jobs_queue_size).
max_children is obsolete. Use threads to set the
number of threads Manticore will use (set to the # of CPU cores by
default).
queue_max_length is obsolete. If it’s really needed,
use jobs_queue_size instead
to fine-tune the internal jobs queue size (unlimited by default).
All /json/* endpoints are now available w/o
/json/, e.g. /search, /insert,
/delete, /pq etc.
the field type, meaning “full-text field”, was renamed to
“text” in describe output.
3.4.2:
```
mysql> describe t;
+-------+--------+----------------+
| Field | Type   | Properties     |
+-------+--------+----------------+
| id    | bigint |                |
| f     | field  | indexed stored |
+-------+--------+----------------+
```
3.5.0:
```
mysql> describe t;
+-------+--------+----------------+
| Field | Type   | Properties     |
+-------+--------+----------------+
| id    | bigint |                |
| f     | text   | indexed stored |
+-------+--------+----------------+
```
Cyrillic и doesn’t map to i in the
non_cjk charset_table (which is the default) any more, as it affected
Russian stemmers and lemmatizers too much.
read_timeout. Use network_timeout
instead, which controls both reading and writing.
Packages
Ubuntu Focal 20.04 official package
deb package name changed from manticore-bin to
manticore
agent_retry_count:
in the case of agents with mirrors, it now gives the number of retries per
mirror instead of per agent, so the total number of retries per agent is
agent_retry_count*mirrors.
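A quick worked example of the formula above, with hypothetical numbers:

```python
agent_retry_count = 3  # under the new meaning: retries per mirror
mirrors = 2            # mirrors configured for the agent
total_retries_per_agent = agent_retry_count * mirrors
print(total_retries_per_agent)  # 6
```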
Commit
3359bcd8 refactored master-agent network polling on kqueue-based
systems (Mac OS X, BSD).
Version 2.6.0, 29 December 2017
Features and improvements
HTTP JSON: JSON
queries can now do equality on attributes; MVA and JSON attributes can
be used in inserts and updates; updates and deletes via the JSON API can be
performed on distributed indexes
Removed support for 32-bit docids from the code, along with all
the code that converts/loads legacy indexes with 32-bit docids.
Morphology
only for certain fields. A new index directive
morphology_skip_fields allows defining a list of fields for which
morphology does not apply.
lots of minor fixes after thorough static code analysis
other minor bugfixes
Upgrade
In this release we’ve changed the internal protocol used by masters and
agents to speak with each other. If you run Manticore Search in a
distributed environment with multiple instances, make sure you first
upgrade the agents, then the masters.
Version 2.5.1, 23 November 2017
Features and improvements
JSON queries on the HTTP
API protocol. Search, insert, update, delete and replace
operations are supported. Data manipulation commands can also be bulked;
there are currently some limitations, as MVA and JSON attributes can’t be used
for inserts, replaces or updates.