How to flush all keys in one namespace only?

I want to know how we can delete all keys for one specific namespace only.
FLUSHALL
deletes all keys, which is a problem when multiple apps use the same Redis server.

You cannot flush a namespace in Redis, but you can delete all keys matching a pattern:
$ redis-cli --scan --pattern 'user:*' | xargs redis-cli unlink
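If you prefer doing this from application code, the same SCAN-and-delete approach can be scripted. A minimal sketch with the Python redis-py client (host, port and the 'user:' prefix are example assumptions):

import redis

r = redis.Redis(host="localhost", port=6379)

# SCAN walks the keyspace incrementally, so the server is never
# blocked the way a single KEYS call would block it.
for key in r.scan_iter(match="user:*", count=1000):
    r.unlink(key)  # UNLINK reclaims memory asynchronously (Redis >= 4)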

You should use (always with caution) FLUSHDB after ensuring you have SELECTed the right database.
On a related subject, you should consider using shared databases really carefully - all sorts of nastiness can ensue from them, and they aren't compatible with the upcoming Redis cluster (v3, in beta at the time of writing, expected release EOY). You may want to look at this benchmark post about Shared vs. Dedicated Redis instances for more background on the subject.
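For example, if each application keeps its keys in its own numbered database, flushing only that database could look like this (a hedged redis-py sketch; db number 3 is an arbitrary example):

import redis

# db=3 is the client-side equivalent of running SELECT 3 first;
# FLUSHDB then clears only this database, not the entire server.
r = redis.Redis(host="localhost", port=6379, db=3)
r.flushdb()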

// Sketch using ioredis: defineCommand registers a custom command
// backed by a server-side Lua script (KEYS blocks the server, so
// use this only on small keyspaces; deletion runs in 5000-key batches).
redis.defineCommand('flush', {
  numberOfKeys: 0,
  lua: "local keys = redis.call('keys', ARGV[1]) " +
    "for i=1,#keys,5000 do " +
    "redis.call('del', unpack(keys, i, math.min(i+4999, #keys))) " +
    "end return keys"
});

function flush(namespace) {
  console.log('delete namespace:', namespace);
  return redis.flush(namespace + ':*'); // e.g. flush('user') deletes user:* keys
}

Related

Deleting a front door backend by address, not index

Using the Azure CLI, I would like to remove a series of backends from a Front Door backend pool based on their address, but from what I can tell you need to know the position of the backend in the list (the index) rather than picking it by address.
I am using az network front-door backend-pool backend list to get the list of backends, but the response does not provide an index to use.
Can I remove a backend by the address, or some other identifier, rather than an index?
If I am forced to delete by index:
If I list the backends multiple times, can I be guaranteed that they always come back in the same order?
If I add a new backend to the pool, is it always the last in the list, and therefore the highest index?
If I delete the first backend (index = 1), does that index get replaced with the next in the list?
The Azure CLI only provides a way to remove a backend by index. But you can use the commands below to find the index of the backend you want to remove by its address:
backends=$(az network front-door backend-pool backend list --resource-group <resource group name> --front-door-name <front door name> --pool-name <pool name>)
echo $backends | jq .
echo $backends | jq '[ .[] | .address == "stantest1016.blob.core.windows.net" ] | index(true) +1'
It is recommended to query the backend list again after adding or removing backends, so that you always work with the latest indexes.
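If you prefer a script over jq, here is a hedged Python sketch of the same index lookup; it shells out to the az command from the answer above, and the resource names and address are placeholders:

import json
import subprocess

# Placeholders: substitute your resource group, Front Door and pool names.
cmd = [
    "az", "network", "front-door", "backend-pool", "backend", "list",
    "--resource-group", "<resource group name>",
    "--front-door-name", "<front door name>",
    "--pool-name", "<pool name>",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
backends = json.loads(out)

# az removes backends by 1-based index, hence the + 1; the result can
# then be passed to the corresponding 'backend remove' command.
address = "stantest1016.blob.core.windows.net"
index = next(i + 1 for i, b in enumerate(backends) if b["address"] == address)
print(index)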

Update ttl for all records in aerospike

I am stuck in a situation where I initialised a namespace with
default-ttl set to 30 days. About 5 million records were written with that (30-day) TTL value. My actual requirement is that the TTL should be zero (0), but the 30-day TTL was applied without me realising it.
So now I want to update the previous (old) 5 million records with the new TTL value (zero).
I've checked/tried "set-disable-eviction true", but it is not working; data is still being removed according to the (old) TTL value.
How do I get out of this? (And I want to retrieve the removed data - how can I?)
First, eviction and expiration are two different mechanisms. You can disable evictions in various ways, such as the set-disable-eviction config parameter you've used, but you cannot disable the cleanup of expired records. There's a good knowledge base FAQ: What are Expiration, Eviction and Stop-Writes?. Unfortunately, expired records that have been cleaned up are gone if their void time is in the past. If those records were merely evicted (i.e. removed before their void time because the namespace crossed its high-water mark for memory or disk), you can cold restart your node, and the records with a future TTL will come back. They won't return if they were durably deleted or if their TTL is in the past (such records get skipped).
As for resetting TTLs, the easiest way would be to do this through a record UDF that is applied to all the records in your namespace using a scan.
The UDF for your situation would be very simple:
ttl.lua
function to_zero_ttl(rec)
  local rec_ttl = record.ttl(rec)
  if rec_ttl > 0 then
    record.set_ttl(rec, -1)
    aerospike:update(rec)
  end
end
In AQL:
$ aql
Aerospike Query Client
Version 3.12.0
C Client Version 4.1.4
Copyright 2012-2017 Aerospike. All rights reserved.
aql> register module './ttl.lua'
OK, 1 module added.
aql> execute ttl.to_zero_ttl() on test.foo
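You can also trigger the same UDF scan from code instead of AQL; older versions of the Python client expose scan_apply for this. A minimal hedged sketch (the host address is a placeholder; 'ttl' is the module name registered above):

import aerospike

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()

# Applies the registered record UDF to every record in test.foo;
# scan_apply returns a job id that can be polled with client.job_info.
job_id = client.scan_apply('test', 'foo', 'ttl', 'to_zero_ttl')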
Using a Python script would be easier if you have more complex logic, with filters etc.
import time

import aerospike
from aerospike_helpers.operations import operations

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
namespace, set_name = 'test', 'foo'

# touch(-1) resets the TTL to "never expire" on every record the query visits.
zero_ttl_operation = [operations.touch(-1)]
query = client.query(namespace, set_name)
query.add_ops(zero_ttl_operation)
policy = {}
job = query.execute_background(policy)
print(f'executing job {job}')

# Poll until the background job finishes.
while True:
    response = client.job_info(job, aerospike.JOB_SCAN, policy={'timeout': 60000})
    print(f'job status: {response}')
    if response['status'] != aerospike.JOB_STATUS_INPROGRESS:
        break
    time.sleep(0.5)
The snippet above is for Aerospike v6 and Python SDK v7.

Why do we use Redis, and what is the right way to implement it with MySQL in PHP?

I have a large amount of data in my database, and sometimes the server stops responding when a query takes longer than the server response time. Is there a way to reduce the load on the MySQL server with Redis, and how do I implement it the right way?
Redis supports a range of datatypes, and you might wonder what a NoSQL key-value store has to do with datatypes. These datatypes help developers store data in a meaningful way and can make data retrieval faster.
Connect with Redis in PHP
1) Download or clone the predis library from GitHub.
2) Require the Predis autoloader and register it, then wrap the client in a try/catch block. The connection settings for connecting to Redis on a local server differ from those for a remote server.
require "predis/autoload.php";
PredisAutoloader::register();
try {
$redis = new PredisClient();
// This connection is for a remote server
/*
$redis = new PredisClient(array(
"scheme" => "tcp",
"host" => "153.202.124.2",
"port" => 6379
));
*/
}
catch (Exception $e) {
die($e->getMessage());
}
Now that we have successfully connected to the Redis server, let’s start using Redis.
Datatypes of Redis
Here are some of the datatypes supported by Redis:
String: Similar to Strings in PHP.
List: Similar to a single-dimensional array in PHP. You can push, pop, shift and unshift; the elements are kept in order of insertion (FIFO: first in, first out).
Hash: Maps between string fields and string values. They are the perfect data type to represent objects (e.g.: A User with a number of fields like name, surname, and so forth).
Set: Similar to a List, except that it has no order and each element may appear only once.
Sorted Set: Similar to a Set, except that each member is associated with a score, which is used to order the set from the smallest score to the largest.
Others are bitmaps and hyperloglogs, but they will not be discussed in this article, as they are pretty dense.
Getter and Setter in PHP Redis (Predis)
In Redis, the most important commands are SET, GET and EXISTS; they are used to store, check, and retrieve data from a Redis server. The Predis client exposes each Redis command as a method of the same name. For example:
// sets message to contain "Hello world"
$redis->set('message', 'Hello world');
// gets the value of message
$value = $redis->get('message');
// Hello world
print($value);
echo ($redis->exists('message')) ? "Oui" : "please populate the message key";
INCR and DECR are commands used to increase or decrease a value:
$redis->set("counter", 0);
$redis->incr("counter"); // 1
$redis->incr("counter"); // 2
$redis->decr("counter"); // 1
$redis->set("counter", 0);
$redis->incrby("counter", 15); // 15
$redis->incrby("counter", 5); // 20
$redis->decrby("counter", 10); // 10
Working with Lists
The basic Redis commands for working with lists are:
LPUSH: adds an element to the beginning of a list
RPUSH: adds an element to the end of a list
LPOP: removes the first element from a list and returns it
RPOP: removes the last element from a list and returns it
LLEN: gets the length of a list
LRANGE: gets a range of elements from a list
Examples:
$redis->rpush("languages", "french"); // [french]
$redis->rpush("languages", "arabic"); // [french, arabic]
$redis->lpush("languages", "english"); // [english, french, arabic]
$redis->lpush("languages", "swedish"); // [swedish, english, french, arabic]
$redis->lpop("languages"); // [english, french, arabic]
$redis->rpop("languages"); // [english, french]
$redis->llen("languages"); // 2
$redis->lrange("languages", 0, -1); // returns all elements
$redis->lrange("languages", 0, 1); // [english, french]
How to Retrieve Data from Redis over MySQL
Treat Redis as the primary read source and MySQL as the fallback: fetch data from Redis first; if it is not found there, fetch it from MySQL, and if MySQL returns data, write it into Redis so the next read can be served from Redis. A basic snapshot is shown below (the get_/update_ helpers are placeholders for your own data-access functions):
// Connect with the Redis database first
$data = get_data_redis($query_param);
if (empty($data)) {
    // Fall back to MySQL
    $data = get_data_mysql($query_param);
    if (!empty($data)) {
        // Update Redis with that data so the next read hits the cache
        update_data_redis($data, $query_param);
    }
}
How to Manage data in MySQL and Redis
To manage data in both databases, update the MySQL database first and then update Redis:
// insert data into MySQL, then into Redis
$inserted = insert_data_mysql($data);
if ($inserted) {
    insert_data_redis($data);
}

// update data in MySQL, then in Redis
$updated = update_data_mysql($data, $query);
if ($updated) {
    update_data_redis($data, $query);
}

// delete data from MySQL, then from Redis
$deleted = delete_data_mysql($query);
if ($deleted) {
    delete_data_redis($query);
}
Redis can be used as a caching layer over MySQL queries.
Redis is an in-memory database, which means it keeps data in memory and can serve it faster than querying MySQL.
One sample use case would be:
Suppose you are creating a game listing site with multiple game categories (car games, bike games, kids' games, etc.), and to find the games for each category you have to query the SQL database for your game listing page. This is a scenario in which you can use Redis as a caching layer and cache the SQL response in Redis for X hours.
Exact steps:
First, GET from Redis.
If found, return it.
If not found in Redis, run the MySQL query and, before returning, save the response in the Redis cache for the next time.
This will offload a huge number of queries from MySQL to the in-memory Redis db.
if (data in redis) {
    step 1: return data;
} else {
    step 1: query MySQL
    step 2: save in Redis
    step 3: return data
}
Some points to consider before choosing which queries to cache in Redis:
Only cache static queries, i.e. those whose data is not user-specific.
Choose the slow static queries to further improve MySQL performance.
Hope it will help.

Partitioning data across hosts in Ansible (access "index" of host in task?)

I am trying to use Ansible to do some parallel computation. My data is trivially parallelizable, I just need to split the file across my hosts (EC2 instances). Is there a canonical way to do this?
The next best thing would be to have a counter that increments for each host. Assuming I have already split my data into my number of workers, I would like to be able to say within each worker task:
- file: src=data/users-{{host_index}}.csv dest=/mnt/users.csv
Then, each worker can process their copy of users.csv with a separate script, that is agnostic to which set of users they have. Is there any way to get this counter index?
I am a beginner to Ansible, so I wonder whether I am overlooking a simple module or idiom, either in Ansible or Jinja. Thanks in advance.
It turns out I have access to a variable called ami_launch_index via the ec2_facts module, which gives a zero-indexed unique ID to each EC2 instance. Here is the code for copying files with numerical suffixes to their corresponding EC2 instances:
tasks:
  - name: Gather ec2 facts
    action: ec2_facts
    register: facts

  - name: Share data to nodes
    copy: src=data/websites-{{facts.ansible_facts.ansible_ec2_ami_launch_index}}.txt dest=/mnt/websites.txt
The copy line produces the following for the src values:
data/websites-1.txt
data/websites-0.txt
data/websites-2.txt
(There is no guarantee that the hosts will iterate in ami_launch_index order)
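If your hosts are not on EC2, or you want an index that does not depend on launch order, a common Ansible idiom (not from the answer above) is to use each host's position in its inventory group, e.g. {{ groups['workers'].index(inventory_hostname) }}. This yields a stable zero-based index as long as the group's membership and ordering in the inventory don't change ('workers' here is a hypothetical group name).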

Uncommitted transactions in Plone + SqlAlchemy + MySql

We have a hybrid web application integrating a MySql db with Plone (last upgrade was to Plone 4.0), using collective.tin, collective.lead and SqlAlchemy.
OK, I know that collective.tin was never released and collective.lead has been superseded; however, everything has worked (almost) perfectly for a few years.
Recently we experienced a very strange behaviour and are looking for help in order to understand it.
Among others, we have 2 Plone content types, say A and B, defined by subclassing collective.tin, and the corresponding innodb MySql tables; rows of B have a foreign key towards A.
In the span of 15-20 minutes, 2 different users created 3 A objects and some 10-20 B objects that weren't committed to MySQL but were indexed by Plone; queries I executed with a MySQL client from the Linux shell couldn't find those A rows (I didn't look for B rows); however, queries executed through the web application (the aforementioned component stack) by those 2 users, and also by other users, occasionally still found and correctly displayed some of those 3 A objects.
Only after I restarted the Zope instance was it possible to resume normal activity from the Plone web interface; 3 A rows and many B rows were still missing from the MySQL db, but the autoincrement counter showed the expected increment. I had to remove 3 invalid brains for A objects from the Plone index (I didn't worry about B objects).
Any suggestion on possible causes and on how to investigate the problem?
We had the exact same problem with SQLAlchemy 0.4; the session would get out of sync with the actual database contents. The problem was somewhat masked in our case because users were sent to specific backends in the cluster through session affinity; if the affinity was suddenly lost, messages appeared to have disappeared. The exact details are a little hazy, because I cannot locate the correct (ancient) revision history of the fix I put in place.
From what I can glean from context, the session identity map prevents the session from querying the database for objects it has retrieved before, so it won't see changes made to those objects in different sessions.
The fix is to call .expire_all() on the session after each and every commit or rollback; SQLAlchemy 0.5 and up does this automatically (autoexpire=True on the session, now called expire_on_commit I believe), but for 0.4 you'll need to register a SessionExtension to do this for you.
Lucky for you, we also use collective.lead for this project, so my fix is your fix:
# The identity map should be flushed on commit.
# SQLAlchemy 0.5 does this properly, but in 0.4 we need to do this via
# a SessionExtension.

from sqlalchemy import __version__

if __version__[:3] == '0.4':
    from sqlalchemy.orm.session import SessionExtension

    class ExpireAllSessionExtension(SessionExtension):
        def after_commit(self, session):
            """Expire the identity-map on commit"""
            session.expire_all()

        def after_rollback(self, session):
            """Expire the identity-map on rollback"""
            session.expire_all()

    def installExtension():
        # Patch collective.lead.database to let us install the extension
        # on the session created there.
        from collective.lead.database import Database
        old_session = Database.session.fget

        def session(self):
            session = old_session(self)
            if session.extension is None:
                session.extension = ExpireAllSessionExtension()
            return session

        Database.session = property(session)
else:
    def installExtension():
        pass
When defining the mapper, you install this extension with:
from .sessionexpiration import installExtension
# Ensure that sessions get properly expired on commit and rollback.
installExtension()