Temporary MUC rooms not getting deleted even after everyone leaves the room - ejabberd

I am using ejabberd 18.09. According to the documentation, I understand that a temporary room should get deleted once every member leaves it.
But in our case I can see on the ejabberd dashboard that rooms created even a month ago are still present on the server.
I can see this on the ejabberd dashboard:
JabberID -> myroom@conference.example.com
participants -> 0
Last message -> A long time ago
Public -> true
Persistent -> false
Logging -> false
Justcreated -> true
and there are hundreds of rooms with similar info.
My room configuration is like this:
host: "conference.@HOST@"
access:
  - allow
access_admin:
  - allow: admin
access_create: all
access_persistent: muc_create
default_room_options:
  allow_change_subj: false
  allow_query_users: true
  allow_private_messages: true
  members_by_default: true
  anonymous: true
  max_users: 10
I am a bit lost here. Why is this happening?
Can anyone help me out, please?
EDIT
I am using mod_muc:create_room/5 to create the room. Then I send a direct invitation from ejabberd to the other users, which they accept before joining the chat room.
To destroy a room, the client sends a destroy packet in the regular scenario. But if the client fails to send the destroy packet within a certain time of room creation (for various reasons: the app being in the background, a phone call, etc.), the users just leave the room, and in those cases I was hoping the temporary rooms would play their part and get destroyed once everyone had left.
These are the logs in the ejabberd.log file:
<0.23497.0>@mod_muc_room:init:137 Created MUC room myroom@conference.example.com by user1@example.com/xiaomi
<0.23476.0>@ejabberd_c2s:process_terminated:262 (tcp|<0.23476.0>) Closing c2s session for user1@example.com/xiaomi: Stream reset by peer
2019-08-26 17:15:13.201 [info] <0.23497.0>@mod_muc_room:close_room_if_temporary_and_empty:1120 Destroyed MUC room myroom@conference.example.com because it's temporary and empty
2019-08-26 17:15:13.201 [info] <0.23497.0>@mod_muc_room:terminate:703 Stopping MUC room myroom@conference.example.com
In the ejabberd dashboard there are some rooms with these values:
JabberID -> myroom1@conference.example.com
participants -> 0
Last message -> A long time ago
Public -> true
Persistent -> false
Logging -> false
Justcreated -> true
While there are some like:
JabberID -> myroom2@conference.example.com
participants -> 1
Last message -> A long time ago
Public -> true
Persistent -> false
Logging -> false
Justcreated -> false
Mostly the pattern is that the rooms with 0 occupants have Justcreated set to true, while the ones with 1 participant left have Justcreated set to false.

According to the documentation I think that a temporary room should
get deleted once every member of the room leaves the room.
Right. In the ejabberd log file, when a user joins a new room:
18:04:06.637 [info] Created MUC room myroom@conference.localhost by user3@localhost/tka1
User leaves the room:
18:04:11.143 [info] Destroyed MUC room myroom@conference.localhost because it's temporary and empty
18:04:11.144 [info] Stopping MUC room myroom@conference.localhost
What do you see in ejabberd.log when a room is created and then the user leaves it?
But in our case I can see on the ejabberd dashboard that rooms created
even a month ago are still present on the server.
How are the rooms created?
If you create one using a desktop Jabber client (for example Gajim or Psi), does that room also stay alive after the user leaves?
Justcreated -> true
Hmm, this is weird. Justcreated is set to true when the room has just been created. Immediately afterwards, the code that joins the first user is executed, and that code sets Justcreated to the timestamp of that user.
Do all your empty, temporary rooms have Justcreated set to true?

I am using mod_muc:create_room/5 to create the room. Then I send
a direct invitation from ejabberd to the other users
Better use the API provided for admins:
$ ejabberdctl create_room room2 conference.localhost localhost
$ ejabberdctl send_direct_invitation room2 conference.localhost "" "Join this cool room" user4@localhost
BTW, I found a problem in ejabberd; try applying this patch so the create_room command doesn't create a forcibly persistent room:
diff --git a/src/mod_muc_admin.erl b/src/mod_muc_admin.erl
index 805e72481..9aed4a017 100644
--- a/src/mod_muc_admin.erl
+++ b/src/mod_muc_admin.erl
@@ -623,10 +623,7 @@ justcreated_to_binary(J) when is_atom(J) ->
%% ok | error
%% @doc Create a room immediately with the default options.
create_room(Name1, Host1, ServerHost) ->
- case create_room_with_opts(Name1, Host1, ServerHost, []) of
- ok -> change_room_option(Name1, Host1, <<"persistent">>, <<"true">>);
- Error -> Error
- end.
+ create_room_with_opts(Name1, Host1, ServerHost, []).
create_room_with_opts(Name1, Host1, ServerHost, CustomRoomOpts) ->
true = (error /= (Name = jid:nodeprep(Name1))),


SQL Error [91016] [22000]: Remote file 'stage_name/java_udf.jar' was not found

I created a jar file and used it as a function, with the same user role for both the function and the Snowflake stage. I uploaded the jar file to the stage using snowsql.
When I run the following command in the Snowflake UI (browser), it works:
ls @~/stage_name
However, when I use the service account, which has a similar role to mine, through DBeaver, it does not work: the listing comes up empty.
Same thing with the function: it works in the Snowflake UI, but not in DBeaver. Please note that both users have the same role. I also added grants for "all privileges" and "usage" (which should be part of all) to the roles I want them to use. But again, it does not work. It shows the error below:
> SQL Error [91016] [22000]: Remote file 'stage_name/java_udf.jar' was
> not found. If you are running a copy command, please make sure files
> are not deleted when they are being loaded or files are not being
> loaded into two different tables concurrently with auto purge option.
However, when I run the function in the Snowflake UI using my user account, it works fine. My user account has the same role as the service account, yet the function fails for the service account. Any ideas?
Followed steps here in the documentation:
https://docs.snowflake.com/en/developer-guide/udf/java/udf-java-creating.html#label-udf-java-in-line-examples
So I think I know the issue.
The stage can be shared using the same role, but the files uploaded to the stage are not: they belong to the users who uploaded them. I loaded exactly the same file to the same internal stage from both accounts, and the copies did not overwrite each other:
Service Account:
name: xxxxxxx.jar
size: 389568
md5: be8b59593ae8c4b8baebaa8474bda0a7
last_modified: Tue, 8 Feb 2022 03:26:29 GMT
User account:
name: xxxxxxx.jar
size: 389568
md5: 0c4d85a3a6581fa3007f0a4113570dbc
last_modified: Mon, 7 Feb 2022 17:03:58 GMT
@~ is the USER-LOCAL storage-only area; thus, unless the automation runs as the "same" user, it will not be able to access it.
This should be provable by taking the same "run" command that works from the WebUI for your user, logging in as the automation user, and seeing that you get the error there.
Reading that linked document fully, you can see that you should use a table stage or a named stage, which you can grant access on to the role you both have.
Working proof:
As user simeon:
create or replace stage my_stage;
create or replace function echo_varchar(x varchar)
returns varchar
language java
called on null input
handler='TestFunc.echo_varchar'
target_path='@my_stage/testfunc.jar'
as
'class TestFunc {
    public static String echo_varchar(String x) {
        return x;
    }
}';
create role my_role;
grant usage on function echo_varchar(varchar) to my_role;
grant all on stage my_stage to my_role;
grant usage on database test to my_role;
grant usage on schema not_test to my_role;
grant usage on warehouse compute_wh to my_role;
then I test it:
use role my_role;
select current_user(), current_role();
/*CURRENT_USER() CURRENT_ROLE()
SIMEON MY_ROLE*/
select test.not_test.echo_varchar('Hello');
/*TEST.NOT_TEST.ECHO_VARCHAR('HELLO')
Hello*/
I created a new user test_two and set them to the role my_role.
on user test_two:
use role my_role;
select current_user(), current_role();
/*CURRENT_USER() CURRENT_ROLE()
TEST_TWO MY_ROLE*/
select test.not_test.echo_varchar('Hello');
/*TEST.NOT_TEST.ECHO_VARCHAR('HELLO')
Hello*/
OK, so a function put on an accessible stage works. Let's put another one on my user simeon's local stage @~.
As user simeon:
create or replace function echo_varcharb(x varchar)
returns varchar
language java
called on null input
handler='TestFuncB.echo_varcharb'
target_path='@~/testfuncb.jar'
as
'class TestFuncB {
    public static String echo_varcharb(String x) {
        return x;
    }
}';
grant usage on function echo_varcharb(varchar) to my_role;
select test.not_test.echo_varcharb('Hello');
/*TEST.NOT_TEST.ECHO_VARCHARB('HELLO')
Hello*/
Back as user test_two:
select test.not_test.echo_varcharb('Hello');
/*Remote file 'testfuncb.jar' was not found. If you are running a copy command, please make sure files are not deleted when they are being loaded or files are not being loaded into two different tables concurrently with auto purge option.*/

How to set a TYPO3 v10 site config for the equivalent of wildcards, but with one exception?

How can I set up the TYPO3 v10 site configuration to do this:
everything incoming goes to one site root page, except one subdomain, which goes to a separate root page?
I have a web app based on TYPO3 8.7 that I'm trying to upgrade to v10. In the 8.7 app, each customer organisation has a unique subdomain - school1.webapp.com, anotherschool.webapp.com, etc. - all pointing to the same TYPO3 site root page. Each time I need to create a new customer, all I have to do is add a new sys_domain record, and a custom plugin picks up the current sys_domain record as the means to separate customer data. A wildcard sys_domain record of *.webapp.com then picks up any misspellings and redirects them to a separate page.
The one exception is auth.webapp.com, which handles OAuth authentication for all customers and goes to a different site root page.
This allows me to add new customers with just a simple form that adds a new sys_domain record, and the job is finished.
I now need to upgrade to TYPO3 v10. I can detect the incoming subdomain to split customer data easily enough, but I'm having problems with the new site configuration tool. All I need is to route auth.webapp.com to one site root page and everything else to another.
My current setup seems to work for routing everything to the site root:
- Entry point: /
- Variant Base: https://%env(HTTP_HOST)%/
- Variant Condition: getenv("HTTP_HOST") == "*"
But if I create a second site entry for the auth.webapp.com domain with
- Entry point: https://auth.webapp.com
I just get an FE error of
"Page Not Found - The page did not exist or was inaccessible. Reason: The requested page does not exist"
An entry point of /auth.webapp.com/ results in this subdomain going to the main customers' entry point, even though the YAML entry says it is pointed at the correct start page.
MAIN SITE - All incoming subdomains except auth.webapp.com
base: /
baseVariants:
  -
    base: 'https://%env(HTTP_HOST)%/'
    condition: 'getenv("HTTP_HOST") == "*"'
errorHandling:
  -
    errorCode: 404
    errorHandler: Page
    errorContentSource: 't3://page?uid=13'
  -
    errorCode: 403
    errorHandler: Page
    errorContentSource: 't3://page?uid=1'
  -
    errorCode: 500
    errorHandler: Page
    errorContentSource: 't3://page?uid=14'
flux_content_types: ''
flux_page_templates: ''
languages:
  -
    title: English
    enabled: true
    base: /
    typo3Language: default
    locale: en_GB.UTF-8
    iso-639-1: en
    websiteTitle: 'Website Title Name'
    navigationTitle: ''
    hreflang: en-GB
    direction: ''
    flag: gb
    languageId: 0
rootPageId: 1
websiteTitle: 'Website Title Name'
AUTHENTICATION SITE - just auth.webapp.com
base: 'https://auth.webapp.com/'
flux_content_types: ''
flux_page_templates: ''
languages:
  -
    title: English
    enabled: true
    base: /
    typo3Language: default
    locale: en_GB.UTF-8
    iso-639-1: en
    websiteTitle: ''
    navigationTitle: ''
    hreflang: ''
    direction: ''
    flag: gb-eng
    languageId: 0
rootPageId: 11
websiteTitle: 'Website Title'
You should make use of environment variables provided by your webserver in this case.
Setting, for example, an AUTHENTICATED variable as an indicator for anything other than auth.webapp.com would let you filter the base variants with the condition in the common configuration and make sure that auth.webapp.com is skipped there.
Jo's answer did the job. I'm posting a little more here to help others.
==============
.htaccess file
==============
<If "%{HTTP_HOST} != 'unique\.domain\.com'">
SetEnv SPECIALNAME_ALLOW allow
</If>
<If "%{HTTP_HOST} == 'unique\.domain\.com'">
SetEnv SPECIALNAME_ALLOW skip
</If>
==============
Typo3 SITE entry for the unique domain
==============
Entry Point - https://unique.domain.com/
Variant Base - https://unique.domain.com/
Variant Condition - getenv("SPECIALNAME_ALLOW") == "skip"
==============
Typo3 SITE entry for everything else
==============
Entry Point - /
Variant Base - /
Variant Condition - getenv("SPECIALNAME_ALLOW") == "allow"

SQL database backup showing error "Exception of type 'System.OutOfMemoryException' was thrown. (mscorlib)"

I need to take a backup of a SQL database with its data. I select
Tasks -> Generate Scripts... -> Next -> Next, and in the table view options I change Script Data from FALSE to TRUE -> Next -> Select All -> Script to New Query Window -> Finish.
But it ends up with an error:
"Exception of type 'System.OutOfMemoryException' was thrown. (mscorlib)"
I have checked the free space on my drives: the C drive has more than 10 GB and the D drive more than 3 GB, while the database is only 500 MB. The error is shown for one particular table, "TBL_SUMM_MENUOPTION", which is an empty table. How can I fix this issue and take the database backup without this error?
Screenshot for better understanding:
As @Alejandro stated in his comment, instead of taking the backup by generating scripts (Tasks -> Generate Scripts... with Script Data set to TRUE), I took the backup using Tasks -> Back Up... -> Database, which is very simple.

Getting time intervals in Ruby

I have an external service that allows me to log users into my website.
To avoid getting kicked out of it for overuse, I use a MySQL table of the following form that caches user accesses:
username (STRING) | last access (TIMESTAMP) | has access? (TINYINT(1) - BOOLEAN)
If the user had access at a given time, I trust he still has access and don't query the service again for a day; that is:
query_again = !user["last access"].between?(Time.now, 1.day.ago)
This always returns true for some reason. Any help with this logic?
In ranges (which you effectively use here), it is generally expected that the lower bound comes first and the higher bound second. Thus, it should work for you if you just swap the arguments in your between? call:
query_again = !user["last access"].between?(1.day.ago, Time.now)
You can test this yourself easily in IRB:
1.hour.ago.between?(Time.now, 1.day.ago)
# => false
1.hour.ago.between?(1.day.ago, Time.now)
# => true
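The same ordering pitfall is easy to demonstrate outside Ruby. Here is a minimal Python sketch (the helper name accessed_within is my own) of the cache check the question is building, with the bounds in the correct low-to-high order:

```python
from datetime import datetime, timedelta

def accessed_within(last_access: datetime, window: timedelta) -> bool:
    """True if last_access lies in the interval [now - window, now].
    As with Ruby's between?, the lower bound must come first."""
    now = datetime.now()
    return (now - window) <= last_access <= now

now = datetime.now()
# Accessed an hour ago, one-day window: inside the interval.
print(accessed_within(now - timedelta(hours=1), timedelta(days=1)))  # True
# Reversed bounds, as in the question's between?(Time.now, 1.day.ago),
# describe an empty interval, so the check is always False:
print(now <= (now - timedelta(hours=1)) <= (now - timedelta(days=1)))  # False
```

With that ordering, query_again = not accessed_within(...) is False for a recent access and True for a stale one, which is what the cache wants.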

Any way to get DKIM records for verifying an SES domain in boto?

Tinkering around with verifying a couple of domains, I found the manual process rather tedious. My DNS provider offers API access, so I figured why not script the whole thing.
The trick is that I can't figure out how to access the required TXT & CNAME records for DKIM verification from boto. When I punch in
dkims = conn.verify_domain_dkim('DOMAIN.COM')
it adds DOMAIN.COM to the list of domains pending verification but doesn't provide the needed records; the returned value of dkims is
{'VerifyDomainDkimResponse': {
    'ResponseMetadata': {'RequestId': 'REQUEST_ID_STRING'},
    'VerifyDomainDkimResult': {'DkimTokens': {
        'member': 'DKIMS_TOKEN_STRING'}}}}
Is there some undocumented way to take the REQUEST_ID or TOKEN_STRING to pull up these records?
UPDATE
If you have an AWS account, you can see the records I'm after at
https://console.aws.amazon.com/ses/home?region=us-west-2#verified-senders:domain
under tab Details: Record Type: TXT (Text), and tab DKIM: DNS Records 1, 2, 3.
These are the records that must be added to the DNS provider to validate the domain & allow DKIM signatures to take place.
This is how I do it with Python:
from boto3 import Session

DOMINIO = 'mydomain.com'
session = Session(
    aws_access_key_id=MY_AWS_ACCESS_KEY_ID,
    aws_secret_access_key=MY_AWS_SECRET_ACCESS_KEY,
    region_name=MY_AWS_REGION_NAME)
client = session.client('ses')

# gets the VerificationToken for the domain, used to add a TXT record to the DNS
result = client.verify_domain_identity(Domain=DOMINIO)
txt = result.get('VerificationToken')

# gets the DKIM tokens, used to add 3 CNAME records
result = client.verify_domain_dkim(Domain=DOMINIO)
dkim_tokens = result.get('DkimTokens')  # this is a list
At the end of the code, you will have txt and dkim_tokens variables: a string and a list, respectively.
You will need to add a TXT record to your DNS whose host name is "_amazonses" and whose value is the value of the txt variable.
You will also need to add 3 CNAME records to your DNS, one for each token in the dkim_tokens list, where the host name of each record is of the form [dkimtoken]._domainkey and the target is [dkimtoken].dkim.amazonses.com.
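The record layout just described can be assembled mechanically from the two API results. A small sketch (the helper name and the record-dict shape are mine, not anything from the SES API) that turns the txt value and the dkim_tokens list into records ready to push to a DNS provider's API:

```python
def ses_dns_records(domain, verification_token, dkim_tokens):
    """Build the DNS records SES expects: one TXT record for domain
    verification, plus one CNAME per DKIM token."""
    records = [{"type": "TXT",
                "host": "_amazonses." + domain,
                "value": verification_token}]
    for token in dkim_tokens:
        records.append({"type": "CNAME",
                        "host": token + "._domainkey." + domain,
                        "value": token + ".dkim.amazonses.com"})
    return records

# Example with placeholder values standing in for real API results:
for r in ses_dns_records("mydomain.com", "abc123", ["t1", "t2", "t3"]):
    print(r["type"], r["host"], "->", r["value"])
```

Each returned dict can then be mapped onto whatever create-record call your DNS provider exposes.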
After adding the DNS records, within some minutes (maybe a couple of hours) Amazon will detect and verify the domain and will send you an email notification. After that, you can enable DKIM signing with this call:
client.set_identity_dkim_enabled(Identity=DOMINIO, DkimEnabled=True)
The methods used here are verify_domain_identity, verify_domain_dkim and set_identity_dkim_enabled.
You may also want to take a look at get_identity_verification_attributes and get_identity_dkim_attributes.
I think the get_identity_dkim_attributes method will return the information you are looking for. You pass in the domain name(s) you are interested in, and it returns the status for each identity as well as the DKIM tokens.
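Building on that suggestion, the response can be unpacked like this. This is a sketch that uses a hard-coded stand-in for the live get_identity_dkim_attributes call, with the response shape taken from the boto3 SES documentation:

```python
def dkim_status(response, domain):
    """Extract the DKIM verification status and tokens for one domain
    from a get_identity_dkim_attributes-style response dict."""
    attrs = response["DkimAttributes"][domain]
    return attrs["DkimVerificationStatus"], attrs["DkimTokens"]

# Stand-in for: client.get_identity_dkim_attributes(Identities=["mydomain.com"])
fake_response = {"DkimAttributes": {"mydomain.com": {
    "DkimEnabled": True,
    "DkimVerificationStatus": "Success",
    "DkimTokens": ["t1", "t2", "t3"]}}}

status, tokens = dkim_status(fake_response, "mydomain.com")
print(status, tokens)  # Success ['t1', 't2', 't3']
```

Against a live client, the same function applies directly to the dict the call returns; polling it until the status reads "Success" is one way to script the whole verification loop.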