When I try to deploy my application on JBoss 5.1, Spring 3 MVC throws this stack trace: http://pastebin.com/Aah386PJ
It tells me that I have two definitions of the same bean in two different packages. The thing is, I don't have this IntershipConfigurationController in the controller package; I have it in controller.internshipConfiguration. I previously added it under the root of controller, but I deleted it from SVN and it no longer appears in the tree.
I cleaned JBoss and the Eclipse project, tried to redeploy, restarted JBoss and Eclipse, etc., but I can't get this project working, while my mates with the same repository can run it with no issue.
I don't know what to do; this is really annoying.
I know this can be frustrating sometimes, but you might want to know why this can happen. Spring's annotation-based component scanning uses an Ant-style path matcher to search the classpath for controllers and components. In your case the classpath either contains a JAR with a previous version of your class or a stale .class file somewhere in your build path.
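For context, the scanning is typically driven by a configuration entry along these lines (the base package shown is only an example, not taken from your project); every .class file found under that package on the classpath gets registered, including stale copies left in build output folders or bundled JARs, which is exactly what produces a duplicate bean definition error:
<context:component-scan base-package="com.example.controller" />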
Make sure you have your project cleaned up and, if possible, disconnect from SVN and download the project again.
You could also try CTRL+SHIFT+T to see if your controller is referenced from any other library.
I finally deleted my JBoss folder, removed the projects from JBoss in Eclipse, extracted a fresh JBoss, ran a clean on JBoss in Eclipse, and then redeployed my projects; it's now working.
Nonetheless it's a really strange issue...
Edit:
It was in fact because the build folder at the root of my project was versioned, and the old classes were still present in it.
The folder is now ignored and removed from SVN, and I've deleted the old classes from my file system.
I have a solution with many projects, and we are migrating to .NET SDK-style projects, but for now we have a mix of .NET Framework-style projects and .NET SDK-style projects.
We are also migrating to GitHub Actions. This solution was building without errors previously, but the restore step started failing when dotnet was updated from 6.0.300 to 6.0.400 (update: I tried targeting 6.0.300 specifically in the setup-dotnet action, but it throws the same errors, so I'm not sure what changed to cause it to fail like this when it was working before).
I updated our local actions runner to 6.0.400, and when I run the command dotnet restore ./path/to/solution.sln it restores the NuGet packages for just the .NET SDK-style projects, as expected.
dotnet is installed with this GitHub Actions step:
- name: Setup .NET
  uses: actions/setup-dotnet@v2
  with:
    dotnet-version: 6.0.x
and restore is called with this GitHub Actions step:
- name: Restore dependencies
  run: dotnet restore ${{env.SOLUTION_FILE_PATH}}
When dotnet restore is run from the GitHub action, I get the following error for every .NET SDK-style project: error MSB4057: The target "Restore" does not exist in the project. It's as if it's trying to restore NuGet packages the way it would for the older .NET Framework-style projects. This is very different from what I've seen before and is unexpected. I have a separate step that calls nuget restore ./path/to/solution.sln to restore packages for the .NET Framework-style projects, and I'm expecting dotnet restore to only restore the .NET SDK-style projects.
Has anyone else run into similar problems with dotnet 6.0.400? Are there better options for restoring NuGet packages in GitHub Actions?
I'm not really sure where to look next, because running the commands locally from the command line works exactly how I would expect; it only behaves oddly when called from GitHub Actions.
Update:
I've been able to reproduce the failure locally by running the dotnet version that is installed locally as part of actions/setup-dotnet@v2.
If I run dotnet restore ... from the global install location C:\Program Files\dotnet\dotnet.exe, then I get the following output, which is what I expect:
Determining projects to restore...
Restored C:\actions-runner\_work\MySolution\MySolution\src\FirstSdkProject\FirstSdkProject.csproj (in 335 ms).
Restored C:\actions-runner\_work\MySolution\MySolution\src\SecondSdkProject\SecondSdkProject.csproj (in 357 ms).
If I restore from the locally installed dotnet at C:\Users\MyUser\AppData\Local\Microsoft\dotnet, then I get the unexpected output that I'm getting in the GitHub action:
Determining projects to restore...
Determining projects to restore...
C:\actions-runner\_work\MySolution\MySolution\src\FirstSdkProject\FirstSdkProject.csproj : error MSB4057: The target "Restore" does not exist in the project.
Nothing to do. None of the projects specified contain packages to restore.
Determining projects to restore...
Nothing to do. None of the projects specified contain packages to restore.
Nothing to do. None of the projects specified contain packages to restore.
Determining projects to restore...
C:\actions-runner\_work\MySolution\MySolution\src\SecondSdkProject\SecondSdkProject.csproj : error MSB4057: The target "Restore" does not exist in the project.
Nothing to do. None of the projects specified contain packages to restore.
Nothing to do. None of the projects specified contain packages to restore.
Determining projects to restore...
Nothing to do. None of the projects specified contain packages to restore.
Nothing to do. None of the projects specified contain packages to restore.
Nothing to do. None of the projects specified contain packages to restore.
Nothing to do. None of the projects specified contain packages to restore.
Comparing the information on the two dotnet.exe files, they are the exact same version of dotnet; a file compare program shows they are binary-identical, and the folders they live in are seemingly the same as well, with only a few minor differences. Why would running restore have two very different outcomes just from running it from different locations?
One of the main differences between a .NET Framework project and a .NET SDK project is how NuGet package references are managed. With .NET Framework projects, NuGet references are managed in packages.config. To build and restore including the packages.config references, you need the following:
msbuild -t:build -restore -p:RestorePackagesConfig=true ./path/to/solution.sln
If you don't have mixed projects and they all follow the .NET SDK csproj format, then you won't have any packages.config references and you can build and restore with this:
msbuild -t:build -restore ./path/to/solution.sln
The RestorePackagesConfig option is only available in MSBuild 16.5+; see https://learn.microsoft.com/en-us/nuget/reference/msbuild-targets#restoring-packagereference-and-packagesconfig-projects-with-msbuild
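If you want to drive this from the workflow itself, a sketch of the steps might look like the following; the microsoft/setup-msbuild action and the SOLUTION_FILE_PATH variable are assumptions based on common usage and on the snippets in the question, so adjust them to your setup:
- name: Setup MSBuild
  uses: microsoft/setup-msbuild@v1
- name: Restore and build
  run: msbuild -t:build -restore -p:RestorePackagesConfig=true ${{env.SOLUTION_FILE_PATH}}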
Having migrated from Spring Boot 1.5.19 to Spring Boot 2.0.4, we are encountering problems with the build on Jenkins. We are using Gradle 4.2.1. We think the behavioural changes in the Spring Boot Gradle plugin between the two versions are causing our issue.
The Spring Boot Gradle plugin has also been updated from 1.5.19 to 2.0.4.
Our target artefact naming convention is:
project-name-<version>-<branch>-RELEASE.jar
The jar file gets generated correctly, having specified the following in the build.gradle file.
bootJar {
    baseName = 'project_name'
}
The problem occurs when the uploadArchives task is executed. This task looks for an artefact with the following naming convention:
<path-folder-name>-<version>-<branch>-RELEASE.jar
where <path-folder-name> is the name of the folder path on the Jenkins server.
It doesn't seem to be picking up the baseName config.
The build pipeline runs successfully when we don’t perform the uploadArchives task. Also, prior to the Spring Boot upgrade, this was not an issue.
Is there a way to get uploadArchives task to look for the generated jar file name?
I resolved this eventually by adding a settings.gradle file and defining a root project name in it:
rootProject.name = "project_name"
I think upgrading the Spring Boot Gradle plugin must have changed the way the project name was being determined.
The 1.5.* version seemed to take the project name from the baseName in the Jar task, but the newer version uses the name of the folder where the app sits.
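For reference, a minimal sketch of how the two files ended up (the names are the ones from the snippets above; adjust them to your project):
settings.gradle:
rootProject.name = "project_name"
build.gradle:
bootJar {
    baseName = 'project_name'
}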
That was fun
Steps to reproduce the problem:
Download and install dotnet core 3.0
Create a new project: dotnet new webapp -n MyApp
Run the app: dotnet run
Navigate to http://localhost:5000/Privacy
Edit Privacy page MyApp\Pages\Privacy.cshtml
Refresh
The changes do not get picked up by the toolkit and the old page is rendered.
The same flow on dotnet core 2.2 (freshly installed) results in an updated page.
Is there a flag that needs to be set somewhere in config to get auto-detection working for 3.0, or is this a bug?
Use the dotnet CLI command to watch run your project:
dotnet watch run
Optionally you can watch run without hot reload enabled:
dotnet watch run --no-hot-reload
Add this instruction to the project file [ProjectName].csproj:
<ItemGroup>
  <!-- extends watching group to include *.cshtml and *.razor files -->
  <Watch Include="**\*.cshtml;*.razor;*.js;*.css" Exclude="**\obj\**\*;bin\**\*" />
</ItemGroup>
For further information see Microsoft DotNet 5.0 Documentation.
While searching for the root cause of this issue I came across this SO question. To resolve it, you need to add the Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation NuGet package to your project and modify your Startup.cs as shown below.
Inside the ConfigureServices method of Startup.cs:
For ASP.NET Core MVC:
IMvcBuilder mvc = services.AddControllersWithViews();
mvc.AddRazorRuntimeCompilation();
For ASP.NET Core Razor Pages:
IMvcBuilder mvc = services.AddRazorPages();
mvc.AddRazorRuntimeCompilation();
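For completeness, the package itself can be added either from the CLI or as a PackageReference in the csproj; the version below is only an example, so pick the one that matches your ASP.NET Core version:
dotnet add package Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation" Version="3.0.0" />
</ItemGroup>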
PS: Do not forget to enable this feature for the development environment only, since it does not make sense for production in most cases.
Source: https://learn.microsoft.com/en-us/aspnet/core/migration/22-to-30?view=aspnetcore-3.0&tabs=visual-studio
2021 UPDATE (better solution): You do not need to call the AddRazorRuntimeCompilation method in Startup.cs. You can keep this feature working by adding a value to the project's debug configuration.
Add a new environment variable under Project Properties > Debug > Environment variables:
Name: ASPNETCORE_HOSTINGSTARTUPASSEMBLIES
Value: Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation
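Setting this through the project properties UI is stored in Properties/launchSettings.json; a rough sketch of the environmentVariables fragment of a launch profile after the change (the ASPNETCORE_ENVIRONMENT entry is just the usual default, not something you need to add):
"environmentVariables": {
  "ASPNETCORE_ENVIRONMENT": "Development",
  "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation"
}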
I'm not entirely sure this is the same issue, but ASP.NET Core 2.2 introduced an In Process IIS hosting model. This provides a lot of performance benefits in a production environment, but basically negates one of ASP.NET Core's most useful development features: automatic updates. If you're using the In Process model in development, you'll need to build after code changes, just like with older ASP.NET MVC sites. You can switch the hosting model back to the Out of Process model (the old way) either by going to your project properties or editing your csproj. In properties, there's a dropdown now on the Debug tab, which corresponds to the <AspNetCoreHostingModel> tag in the csproj.
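If you'd rather edit the csproj directly, the property mentioned above looks roughly like this when switched back to the old behaviour (a sketch; the surrounding PropertyGroup is whatever your project already has):
<PropertyGroup>
  <AspNetCoreHostingModel>OutOfProcess</AspNetCoreHostingModel>
</PropertyGroup>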
UPDATE
For what it's worth, I've seen this behavior be somewhat random. Some changes seem to kick off an automatic build like they used to, while other changes don't show up unless I manually build. There doesn't seem to be a lot of rhyme or reason to which changes require what, either. It's possible that the tooling for this isn't quite there yet, and a future update to Visual Studio may make the In Process model behave as consistently as the Out of Process model did with code changes in development. All I know is that switching to Out of Process definitely resolves all issues with this, so it's related to the In Process model in some way.
Following the OpenShift tutorials, when you create a Tomcat application and clone it, the local repository contains a pom.xml and a webapp folder.
What's the equivalent for a DIY application, which contains only diy and misc folders?
Thanks in advance, any help is appreciated because I'm really stuck here!
Update
Well, I've installed a Tomcat 8 DIY application following this tutorial; everything works fine and I can see the Tomcat page in the browser. The problem is how to deploy a .war file.
For a Tomcat 6/7 application on OpenShift, the local git repository has this structure:
____Tomcat7/6
|_________ webapp
|_________ src
|_________ pom.xml
But for a Tomcat 8 DIY application I have this structure:
________Tomcat8/diy
|__________ Diy
|__________ misc
|__________ readme
So where do I deploy my .war files, since there is no webapp folder?
The title of your question suggests that you are mixing up at least 5 different, completely independent & orthogonal tools and concepts:
Git is a version control system ("push", "local repo")
Maven is a build tool ("pom.xml")
Apache Tomcat is a servlet container ("Tomcat 6/7/8")
rhc is some client tool provided by yet another cloud computing platform ("OpenShift")
Your code is the stuff that you have written, it's completely under your responsibility.
Before you start doing anything, please make sure that you have at least some basic understanding of what each of these tools does. Then ask yourself whether you really need Tomcat 8 instead of Tomcat 7, and whether a two-year-old blog post about compiling Tomcat 8 inside an OpenShift gear is the best source. All these deployment details can change pretty quickly; if it worked two years ago, it's not guaranteed to work now.
I've never worked with OpenShift, but as far as I understand, the basic idea is this:
You write your code
You create your OpenShift account and allocate some "Gear" (or "Dyno" or whatever...) for your application
You commit your source code (/src) and the files that are necessary for the build (pom.xml), and use git to push it to the repository OpenShift gave you.
OpenShift then uses your pom.xml and builds all the WAR files on its own
Then you can use your rhc client tool to start your application, if that's not done automatically.
Some of these steps can be changed.
If you really have to, you can indeed compile your own Tomcat8, the tutorial you linked tells you how (more or less. The dude who did it obviously knew what he was doing there, so he might have skipped some details that seemed trivial to him).
Furthermore, if you really want, you can deploy pre-packaged WAR files by deliberately removing all the stuff that is necessary to build your app (removing pom.xml and all of /src), instead adding the packaged application to your git repo and then pushing it all to OpenShift. It will then skip the build step and just run what you gave it. OpenShift provides some information about this deployment strategy: https://help.openshift.com/hc/en-us/articles/202399740. Please read the documentation and make sure you understand what you want to do. For example, filter-branching your git repo and removing all source files you have ever written is not a good idea, even if you don't need these files on OpenShift.
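As a rough sketch of what that looks like with the standard Tomcat (JBoss EWS) cartridge rather than the DIY one, the repository ends up containing little more than the packaged WAR; the directory name here is only the cartridge's usual convention, so double-check it against the OpenShift documentation linked above:
________my-app-repo
|__________ webapps
             |_______ mywebapp.war
(pom.xml and the src folder removed from the repo)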
Currently, I don't see anything of the standard Tomcat directory structure in the tree that you show. Instead, there seem to be just some basic Ruby scripts or other default demo-app stuff... That's why it's called "do it yourself". If you don't want this, take a standard Tomcat 7 app.
I was just reading about using libraries in GlassFish, that is, putting JAR files in a 'centralized' location so that they can be accessed from different web applications. domains-dir/lib/ext is one such location. I put some JAR files there and restarted the server. The restart was successful, but no application would load; not even the admin console. I investigated this and found the culprit to be the PrimeFaces JAR file I put there. On removing it, GlassFish worked properly. I've tried versions 3.1 and 3.2 of PrimeFaces and the results are the same. Checking the server log, I find that, with PrimeFaces in the ext folder, the class javax.faces.context.PartialViewContextFactory fails to load. Any idea what might be causing this? I should probably try the other library locations like domains-dir/lib/, but I'm curious.
By the way, I'm working on Windows 7 and using GlassFish 3.1.1.
Thanks.
Just put the libs in domains-dir/lib/.
From the GlassFish manual:
To use the Common class loader, copy the JAR files into the domain-dir/lib or as-install/lib directory, or copy the .class files (and other needed files, such as .properties files) into the domain-dir/lib/classes directory, then restart the server.
Using the Common class loader makes an application or module accessible to all applications or modules deployed on servers that share the same configuration. However, this accessibility does not extend to application clients.
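As a concrete sketch on Windows (the GlassFish install path and domain name below are just the common defaults; adjust them to your installation), that boils down to:
copy primefaces-3.2.jar "C:\glassfish3\glassfish\domains\domain1\lib"
asadmin restart-domain domain1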
More information about classloading in Glassfish can be found here.