smbtree sometimes lists the network tree and sometimes it doesn't? - samba

Why does smbtree sometimes list the network tree and sometimes not?
smbtree
Lists all computers on the network -- good
Now I tried again on the same machine, without making any changes to the configuration:
smbtree
Blank -- bad. Why?
When I run smbtree in my Linux terminal, sometimes it is good and lists all the networks, domains and shares, but sometimes it is bad and doesn't list anything.
Note: I have not made any changes to the smb.conf file.
Why is it sometimes bad?
What does smbtree return when it fails?

Your firewall is blocking ports 445 and 139, which are used by SMB. Configure your iptables rules to allow access to these ports and you will be good.
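For illustration, a minimal sketch of iptables rules that open the SMB-related ports (the chains, interfaces and source networks are assumptions to adjust for your setup; smbtree's browsing also relies on the NetBIOS UDP ports, which may explain intermittent results):
# SMB over TCP and the NetBIOS session service
iptables -A INPUT -p tcp --dport 445 -j ACCEPT
iptables -A INPUT -p tcp --dport 139 -j ACCEPT
# NetBIOS name/datagram services used for network browsing
iptables -A INPUT -p udp --dport 137:138 -j ACCEPT
# verify the rules are in place
iptables -L INPUT -n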

Related

How to force close an Access application when the user has lost connection to the back-end

The Question:
Is there some way to force close Access so it doesn't need access to the back-end server in order to exit?
The Situation:
I have an Access 2016 DB. The back-end is on a networked share drive which is only accessible when connected to the LAN or on VPN. On load there is a ping test to the server; if the server is found, the application copies the tables to local tables, and if not, it just tells the user it can't connect and continues on using the old data. The users travel a lot and can't always be on the VPN, so the idea is that the data they have isn't more than a few days old. BTW, did I mention the users are only consumers of information and not contributors, so I don't care that they can't write to the back-end. The tables contain a few hundred thousand records; the application just puts them into nice, easy-to-search, cross-referenced reports.
The Problem:
While this loads and runs really nicely regardless of whether they are connected to the LAN or not, it will NOT close if they don't have a connection to the server. It doesn't produce an error which I could easily handle; it just hangs. Task Manager won't even close it.
Attempted Solutions:
I tried to unlink the tables and just use a temporary connection to the back-end to load the tables when I need them at the beginning. However, this meant the user was prompted by the Microsoft Trust Center about eight times every single time they loaded the application, unless I had each of them actually open the back-end DB themselves and gave them the password to do that, and none of that is practical.
Access doesn't play well with a remote BE. If you want to be on the remote side with Access, you have 2 options:
Connect via RDS: the user connects to the server via Remote Desktop, so everything is "local" and there are no issues with lost connections. As long as the RDP connection holds, everything is smooth, and more importantly you don't have BE disconnects that cause corruption or data loss (hint: using the RemoteApp technology it will seem to the end user like he/she is working locally; I am using it and it's great).
Switch the BE: as I said, it is not wise to use an Access BE over a remote connection. Switching to MSSQL/MySQL/PostgreSQL etc. will give you true remote connection capability.
After playing with all the settings for a few days, I finally figured out what my problem was.
In an effort to test different settings to see if I could reduce file size, at one point I turned on "clear cache on exit" in the Current Database settings. Turning this off fixed the problem. I had forgotten that was on, so it turned out not to be a programming issue after all.

How to toughen an MS Access database against frequent network disconnections

My team and I utilize MS Access databases across a network that disconnects frequently. Whenever a disconnect happens, there's a cascade of failure messages in Access and any records mid-entry are lost.
We know what's causing this, but it's beyond the level of my authority to fix. It's related to Windows 10 re-mapping the network drive whenever there's a group policy update, causing it to 'lose' the network drive for a split-second; long enough to disconnect the database.
As resolving the network disconnects will involve the IT department escalating it to the national level (Government computer system), I need a fix "now" so my form files don't generate a dozen errors and need to be restarted every time this happens.
What settings or code could I use to harden the forms files against network disconnects?
Edit: To answer questions:
The data is kept in a separate file from the forms, allowing multiple people to work on the database at the same time.
I believe it's pointing to a drive letter for where the data file is. I don't know how to set up a server address location. My method of connecting was to browse to the file.
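For what it's worth, a linked back-end path via a mapped drive letter versus a UNC server address might look like the following (the server and share names are made up for illustration); pointing the table links at the UNC path avoids depending on the drive mapping:
Z:\Databases\Backend.accdb                          (mapped drive letter)
\\fileserver01\DeptShare\Databases\Backend.accdb    (UNC server address)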

Nginx vs Apache to solve load issue on website

So I have a web application that has 10-12 pages with many POST/GET DB calls. We usually have an Apache crash or other problem when site traffic reaches about 1000 concurrent users, which is a very small number, and we have updated the server with good RAM and resources. Our sysadmin has done load testing with blitz and other custom scripts and is suggesting we move away from Apache. Some things don't make sense to me; Apache shouldn't be too bad at handling a few thousand concurrent users, considering we have Cloudflare for caching. Here is what he suggested:
1. Replacement of Apache+mod_fcgi with Nginx+php-fpm, which can make the server handle many more users, and then test it.
or
2. For testing: we need 10-20 servers to run a scenario from. Basically, what is needed is a more complex blitz.io analogue: create one server, which takes all those hours, then just clone it in the cloud and pay for about 1 hour of testing multiplied by the number of servers needed.
Once again, there are many DB calls and .htaccess usage. Also, what makes Nginx better than Apache in this case?
I would check this comparison first. Basically, nginx is event based, so it's able to handle more requests concurrently. However, as the MySQL DB seems to be the choke point here, it's very possible that nginx wouldn't solve all your problems. Perhaps moving to a NoSQL kind of database, that's better at scaling horizontally, would help (if that's feasible).
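As a rough illustration of how one might confirm where the bottleneck actually sits before switching web servers (the URL, credentials and tools below are assumptions, not part of the original setup):
# generate load against one heavy page with ApacheBench
ab -n 5000 -c 500 https://www.example.com/heaviest-page &
# meanwhile, watch whether queries pile up in MySQL...
mysqladmin -u root -p -i 1 processlist
# ...and whether mysqld, rather than apache/php, is eating the CPU
top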

How to benchmark and optimize a really database-intensive Rails action?

There is an action in the admin section of a client's site, say Admin::Analytics (that I did not build but have to maintain) that compiles site usage analytics by performing a couple dozen, rather intensive database queries. This functionality has always been a bottleneck to application performance whenever the analytics report is being compiled. But, the bottleneck has become so bad lately that, when accessed, the site comes to a screeching halt and hangs indefinitely. Until yesterday I never had a reason to run the "top" command on the server, but doing so I realized that Admin::Analytics#index causes mysqld to spin at upwards of 350+% CPU power on the quad-core, production VPS.
I have downloaded fresh copies of the production data and the production log. However, when I access Admin::Analytics#index locally on my development box, while using the production data, it loads in about 10-12 seconds (and utilizes ~150+% of my dual-core CPU), which sadly is normal. I suppose there could be a discrepancy in MySQL settings that has suddenly come into play. Also, a mysqldump of the database is now 531 MB, when it was only 336 MB 28 days ago. Anyway, I do not have root access on the VPS, so tweaking mysqld performance would be cumbersome, and I would really like to get to the exact cause of this problem. However, the production logs don't contain info on the queries; they merely report the length of time these requests took, which averages out to a few minutes apiece (although they seem to have caused mysqld to stall for much longer than this, prompting me in one instance to ask our host to reboot mysqld just to get our site back up).
I suppose I can try upping the log level in production to solicit info. on the database queries being performed by Admin::Analytics#index, but at the same time I'm afraid to replicate this behavior in production because I don't feel like calling our host up to restart mysqld again! This action contains a single database request in its controller, and a couple dozen prepared statements embedded in its view!
How would you proceed to benchmark/diagnose and optimize/fix this action?!
(Aside: Obviously I would like to completely replace this functionality with Google Analytics or a similar solution, but I need to fix this problem before proceeding.)
I'd recommend taking a look at this article:
http://axonflux.com/building-and-scaling-a-startup
Particularly, query_reviewer and newrelic have been a life-saver for me.
I appreciate all the help with this, but what turned out to be the fix was to implement a couple of indexes on the Analytics table to cater to the queries in this action. A simple Rails migration added the indexes, and the action now loads in less than a second both on my dev box and on prod!
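A minimal sketch of that kind of fix from the MySQL client rather than a migration (the database, table and column names below are placeholders, not the real schema):
mysql -u app_user -p app_production <<'SQL'
-- see how one of the slow report queries executes today
EXPLAIN SELECT COUNT(*) FROM analytics
        WHERE event_type = 'page_view' AND created_at > '2011-01-01';
-- add a composite index matching that WHERE clause, then re-run the EXPLAIN
ALTER TABLE analytics
  ADD INDEX index_analytics_on_event_type_and_created_at (event_type, created_at);
SQL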

Can a webserver determine if it's the active node of an HA failover system without hard-coding anything on the server itself?

I can think of a few hacks using ping, the box name, and the HA shared name but I think that they are leading to data leakage.
Should a box even know it's part of an HA cluster or what that cluster's name is? Is this more a function of DNS? Is there some API exposed for boxes to join an HA cluster and request the id of the currently active node?
I want to differentiate between the inactive node and active node in alerting mechanisms for a running program. If the active node is alerting I want to hit a pager and on the inactive node I want to send an email. Pushing the determination into the alerting layer moves the same problem elsewhere.
EASY SOLUTION: Polling the server from an external agent that connects through the network makes any shell game of who is the active node a moot point. To clarify, the only thing that will page is the remote agent monitoring the real server. Each box can send emails all day long for all I care.
It really depends on the HA system you're using.
For example, if your system uses a shared IP and the traffic is managed by some hardware box, then it can be hard to determine whether a given box is the master or a slave. That will really depend on the specific solution... As long as you can add a custom script to the supervisor, you should be OK; for example, the controller can ping a daemon on the master server every second. In the alerting script, simply check whether the time since the last ping is < 2 sec...
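A rough sketch of that freshness check, assuming the controller touches a marker file on each ping (the file path, the 2-second threshold and the alert actions are all made up for illustration):
#!/bin/sh
# the controller refreshes this file every second while this node is active
STAMP=/var/run/ha_master_ping
NOW=$(date +%s)
LAST=$(stat -c %Y "$STAMP" 2>/dev/null || echo 0)
if [ $((NOW - LAST)) -lt 2 ]; then
    echo "active node: page the on-call"    # hook the pager integration in here
else
    echo "standby node: send email only"
fi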
If your system doesn't have a supervisor/controller node, but each node tries to determine the state itself, you can have more problems. If a split brain occurs, you can end up with two slaves or two masters, so your alerting software will be wrong in both cases. Tools that can ensure only one live node (STONITH and others) could help.
On the other hand, in the second scenario, if the HA software works properly on both hosts, you should be able to obtain the master/slave information straight from it. It has to know its own state at all times, because that's one of its main functions. In most HA solutions you should be able to either query the current state or add some code to run when the state changes. Heartbeat offers both.
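With Heartbeat, for instance, a script can usually ask the local daemon directly via its cl_status helper; the exact subcommands and output vary by version, so treat this as an assumption to verify on your install:
# is Heartbeat itself running on this node?
cl_status hbstatus
# does this node currently hold the cluster resources ("local" or "all" means active)?
cl_status rscstatus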
I wouldn't worry about the edge cases like a split brain though. Almost any situation when you lose connection between the clustered nodes will be more important than the stuff that happens on the separate nodes :)
If the thing you care about is really only logging/alerting, then ideally you could have a separate logger box which gets all the information about the current network/cluster status. An external box will probably have a better idea of how to deal with the situation. If your cluster gets DoS'ed, disconnected from the network, or loses power, you won't get any alert. A redundant pair of independent monitors can save you from that.
I'm not sure why you mentioned DNS - due to its refresh time it shouldn't be a source of any "real-time" cluster information.
One way is to get the box to export its idea of whether it is active into your monitoring. From there you can predicate paging/emailing on this status (with a race condition around failover), and alert when none or too many systems believe they are active.
Another option is to monitor the active system via a DNS alias (or some other method to address the active system) and page on that. Then also monitor all the systems, both active and inactive, and email on that. This will cause duplicate alerts for the active system, but that's probably okay.
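As a rough illustration of that split (the alias, health URL and the page_oncall/send_email helpers below are hypothetical):
# page only if the service behind the active alias stops answering
curl -fsS --max-time 5 https://service-active.example.com/health || page_oncall   # page_oncall is a hypothetical helper
# email if any individual node, active or not, stops answering
for host in node1.example.com node2.example.com; do
    curl -fsS --max-time 5 "https://$host/health" || send_email "$host unhealthy"  # send_email is hypothetical too
done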
It's hard to be more specific without knowing more about your setup.
As a rule, the machines in an HA cluster shouldn't really know which one is active. There's one exception, mind, and that's cronjobs. At work, we have an HA cluster on top of which some rather important services run. Some of those services have cronjobs, and we only want them running on the active box. To do that, we use this shell script:
#!/bin/sh
# Replace 0.0.0.0 with the external (floating) IP of your HA cluster.
HA_CLUSTER_IP=0.0.0.0
# If that IP is currently assigned to this box, we are the active node,
# so run the command passed in as arguments.
if ip addr | grep $HA_CLUSTER_IP >/dev/null; then
    eval "$@"
fi
(Note that this is running on Debian.) What this does is check whether the current box is the active one within the cluster (replace 0.0.0.0 with the external IP of your HA cluster) and, if so, execute the command passed in as arguments to the script. This ensures that one and only one box is ever actually executing the cronjobs.
Other than that, there's really no reason I can think of why you'd need to know which box is the active one.
UPDATE: Our HA cluster uses Heartbeat to assign the cluster's external IP address as a secondary address to the active machine in the cluster. Programmatically, you can check to see if your machine is the current active box by calling gethostbyname(), and iterating over the data returned until you either get to the end or you find the cluster's IP in the list.
Without hard-coding...? I assume you mean some native Heartbeat query; I'm not sure. However, you could use ifconfig: HA creates a virtual interface on whatever interface it is configured to run on. For instance, if HA was configured on eth0, it would create a virtual interface eth0:0, but only on the active node.
Therefore you could do a simple query of the ifconfig output to determine whether the server is the active node or not; for example, if eth0 was the configured interface:
ACTIVE_NODE=`ifconfig | grep -c 'eth0:0'`
That will set the $ACTIVE_NODE variable to 1 (active) or 0 (standby). Hope that may help.