From: gbraad at redhat.com (Gerard Braad)
Date: Wed, 1 Nov 2017 14:44:14 +0800
Subject: [Devtools] minishift addon and profile

On Wed, Nov 1, 2017 at 3:00 AM, Burr Sutter wrote:
> MINISHIFT_ENABLE_EXPERIMENT=on ./minishift start

Why are you using the experimental feature toggle in this case?

From: bsutter at redhat.com (Burr Sutter)
Date: Wed, 1 Nov 2017 08:52:15 -0400
Subject: [Devtools] minishift addon and profile

On Wed, Nov 1, 2017 at 2:44 AM, Gerard Braad wrote:
> Why are you using the experimental feature toggle in this case?

I am also trying to get the Service Catalog, and that seems to work.

I have to start flying today, for the next 15 days, so I will be testing whenever, wherever :-)

From: bsutter at redhat.com (Burr Sutter)
Date: Sat, 4 Nov 2017 05:24:22 -0400
Subject: [Devtools] minishift addon and profile

Is there another way to get the service catalog besides MINISHIFT_ENABLE_EXPERIMENT=on?
From: bsutter at redhat.com (Burr Sutter)
Date: Sat, 4 Nov 2017 06:40:47 -0400
Subject: [Devtools] minishift profiles

I am trying to make profiles work for me :-)

My use case for profiles is:

  profile one start
  profile one stop
  profile two start
  profile two stop

and I would like to perform this sequence during a live presentation, over a very, very slow conference wifi.

It is too slow to use right now. I was hoping that a profile would "cache" everything it needed and not have to re-download things upon the next start.

Pulling the openshift image is the slowest part (over a very slow conference wifi):

  Starting OpenShift using openshift/origin:v3.7.0-rc.0 ...
  Pulling image openshift/origin:v3.7.0-rc.0
  Pulled 1/4 layers, 26% complete
  Pulled 1/4 layers, 37% complete
  Pulled 1/4 layers, 52% complete
  Pulled 1/4 layers, 66% complete
  Pulled 2/4 layers, 82% complete
  Pulled 3/4 layers, 91% complete
  Pulled 4/4 layers, 100% complete
  Extracting
  Image pull complete
  OpenShift server started.

My startup script is as follows:

  ./minishift profile set helloworldmsa
  ./minishift config set memory 6GB
  ./minishift config set cpus 2
  ./minishift config set vm-driver virtualbox
  ./minishift addon enable admin-user
  ./minishift config set openshift-version v3.7.0-rc.0
  ./minishift config set iso-url centos

  MINISHIFT_ENABLE_EXPERIMENT=on ./minishift start

From: jmaury at redhat.com (Jean-Francois Maury)
Date: Sat, 4 Nov 2017 14:20:35 +0100
Subject: [Devtools] minishift profiles

I did not experience the same behavior. Once a profile has been started, its data is cached. Maybe the VM was deleted after you first started it.

On Sat, Nov 4, 2017 at 11:40 AM, Burr Sutter wrote:
> I was hoping that a profile would "cache" everything it needed and not
> have to re-download things upon the next start.

--
JEFF MAURY
Red Hat
From: bsutter at redhat.com (Burr Sutter)
Date: Sun, 5 Nov 2017 10:39:20 +0200
Subject: [Devtools] minishift profiles

Trying to use the profiles for an actual demo today - a real production use case :-)

One thing I have noticed about the syntax:

  minishift profile list

but

  minishift --profile myprofileA stop
  minishift --profile myprofileB start

I need to "list" so I know what to stop/start, but one syntax requires no "--" and the other has "--", which I find odd since I am using the commands back to back.

From: manderse at redhat.com (Max Rydahl Andersen)
Date: Sun, 05 Nov 2017 13:43:05 +0100
Subject: [Devtools] minishift profiles

On 5 Nov 2017, at 9:39, Burr Sutter wrote:
> but one syntax requires no "--" and the other has "--" which I find
> to be odd since I am using the commands back to back.

The first is a command operating on the profiles that minishift knows about:

`minishift profile list`

The last two start and stop the OpenShift cluster, and on those commands you add a flag to indicate you want them to act on something other than the default.

That means the more consistent syntax would be to write:

```
minishift profile list
minishift start --profile myprofileA
minishift stop --profile myprofileB
```

So the first command gives you the list of profile names you can pass.

/max
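For what it's worth, the flag form Max describes lends itself to scripting the back-to-back case from the original question. The sketch below (not from the thread) derives one stop command per listed profile; the `profile list` output format assumed here ("- name<TAB>state" per line) is a guess and may need adjusting for your minishift version:

```shell
# Sketch: turn `minishift profile list` output into one
# `minishift stop --profile <name>` command per profile.
# Assumed input format (hypothetical): "- <name>\t<state>" per line.
stop_commands() {
  sed 's/^- //' | awk '{print "minishift stop --profile " $1}'
}

# Dry run with a fake listing; pipe the real `minishift profile list`
# output into stop_commands (and then into sh, once verified) instead.
printf -- '- myprofileA\tRunning\n- myprofileB\tStopped\n' | stop_commands
# -> minishift stop --profile myprofileA
# -> minishift stop --profile myprofileB
```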
From: bgurung at redhat.com (Budh Ram Gurung)
Date: Sun, 5 Nov 2017 18:51:38 +0530
Subject: [Devtools] minishift addon and profile

Hi Burr,

On Sat, Nov 4, 2017 at 2:54 PM, Burr Sutter wrote:
> Is there another way to get the service catalog besides
> MINISHIFT_ENABLE_EXPERIMENT=on ?

AFAIR, there is no other way to get the service catalog feature than this. We don't even expose the flag unless the experimental env var is set:

  $ minishift start -h | grep service-catalog

  $ MINISHIFT_ENABLE_EXPERIMENTAL=on minishift start -h | grep service-catalog
        --service-catalog          Install service catalog (experimental)

Regards,
Budh Ram Gurung

From: bsutter at redhat.com (Burr Sutter)
Date: Sun, 5 Nov 2017 17:35:42 +0200
Subject: [Devtools] minishift profiles

I was able to switch profiles live today.
One profile set up for helloworld msa, another profile set up for Istio.

From: manderse at redhat.com (Max Rydahl Andersen)
Date: Sun, 05 Nov 2017 17:04:02 +0100
Subject: [Devtools] minishift profiles

> I was able to switch profiles live today.
>
> One profile setup for helloworld msa
> another profile setup for Istio

So...success!!?
:)

/max

From: bsutter at redhat.com (Burr Sutter)
Date: Sun, 5 Nov 2017 18:15:53 +0200
Subject: [Devtools] minishift profiles

On Sun, Nov 5, 2017 at 6:04 PM, Max Rydahl Andersen wrote:
> So...success!!? :)

I think so. I did not notice the "caching" problem that I had yesterday. It is slow to bring up a large VM with lots of JVMs running, but it worked.
From: bsutter at redhat.com (Burr Sutter)
Date: Sun, 05 Nov 2017 16:45:50 +0000
Subject: [Devtools] Fabric8 maven plug-in

I believe fabric8:debug is broken.

It could be that my cluster is under too much strain, but the debug has been taking several minutes and I had to just give up.

Has anyone else had recent success with fabric8:debug?
From: lmohanty at redhat.com (Lalatendu Mohanty)
Date: Mon, 6 Nov 2017 00:32:48 +0530
Subject: [Devtools] minishift profiles

On Sun, Nov 5, 2017 at 2:09 PM, Burr Sutter wrote:
> minishift profile list
> but
> minishift --profile myprofileA stop
> minishift --profile myprofileB start
>
> I need to "list" so I know how to stop/start

This is interesting. So you prefer "minishift --profile PROFILE_NAME" over "minishift profile set PROFILE_NAME", then running the "config set" commands, and finally just "minishift start".

"--profile" is a global flag which can be used to run a command against a profile even if it is not the active profile. In general we expect the user to set an active profile and just run the normal minishift commands for that profile.

The "minishift status" command also tells you which profile you are currently using.

From: bsutter at redhat.com (Burr Sutter)
Date: Sun, 05 Nov 2017 22:33:43 +0000
Subject: [Devtools] minishift profiles

For the initial setup I am good with the sequence:

  profile set
  config set
  add-on apply

But for subsequent restarts, running that same sequence yields lots of messages that look bad to the end-user. So on restarts I am using:

  minishift --profile whatever start

The big win on profiles is the ability to start them and stop them often, on the fly, in front of a live audience.
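The setup-once / restart-often split described above can be written as a small script. The minishift commands below are assembled from the startup script quoted earlier in this thread; the marker file is a hypothetical convenience (not a minishift feature), and note the thread spells the env var both EXPERIMENT and EXPERIMENTAL:

```shell
#!/bin/sh
# Sketch: run the noisy one-time profile setup only once, then use the
# quiet `--profile ... start` form for live-demo restarts.
# The marker file is an illustrative addition, not a minishift feature.
PROFILE=helloworldmsa
MARKER="$HOME/.setup-done-$PROFILE"

if [ ! -f "$MARKER" ]; then
  minishift profile set "$PROFILE"
  minishift config set memory 6GB
  minishift config set cpus 2
  minishift config set vm-driver virtualbox
  minishift addon enable admin-user
  minishift config set openshift-version v3.7.0-rc.0
  touch "$MARKER"
fi

# EXPERIMENTAL matches the help output quoted by Budh Ram earlier;
# other messages in the thread use MINISHIFT_ENABLE_EXPERIMENT.
MINISHIFT_ENABLE_EXPERIMENTAL=on minishift start --profile "$PROFILE"
```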
Plus, it is best to stop when putting the laptop to sleep to avoid the df cpu problem. Does a subsequent start also get the clock in the VM corrected?

From: gbraad at redhat.com (Gerard Braad)
Date: Mon, 6 Nov 2017 08:26:09 +0800
Subject: [Devtools] minishift profiles

Hi,

On Sun, Nov 5, 2017 at 4:39 PM, Burr Sutter wrote:
> but one syntax requires no "--" and the other has "--" which I find to be
> odd since I am using the commands back to back.
One is a command to set the active profile for subsequent commands, while the other is a flag to perform a single action/command outside of the currently chosen or active profile.

On Sun, Nov 5, 2017 at 11:35 PM, Burr Sutter wrote:
> I was able to switch profiles live today.

Good to hear! There is actually not much to it, but we do still have some kinks to iron out. Feedback like yours will help, as some fixes can be as easy as better feedback during start, documentation, etc.

On Mon, Nov 6, 2017 at 6:33 AM, Burr Sutter wrote:
> But for subsequent restarts, running that same sequence yields lots of
> messages that look bad to the end-user.

At the moment we provide verbose feedback about the startup of Minishift, as we get many reports related to missing settings, to being on a different distro, or just to occasionally weird behaviour. We will try to reduce the output once we are a little more stable, and only report things when we really see a failure. This is pretty much what happened with the output of `oc cluster up`; they also dropped the verbosity after some time.

> The big win on profiles is the ability to start them and stop them often, on the fly, in front of a live audience.

As long as the IP address doesn't change... We still have an issue with how certificates are generated by `oc cluster up`: as long as there is no way to regenerate the certificates for a new IP address assigned to the VM, the startup will fail. We can work around this by forcibly assigning an address on restart, but that only works when it is the same address the certificate was generated for. I hope this can be solved.
Gerard

From: gbraad at redhat.com (Gerard Braad)
Date: Mon, 6 Nov 2017 08:37:41 +0800
Subject: [Devtools] minishift addon and profile

On Sun, Nov 5, 2017 at 9:21 PM, Budh Ram Gurung wrote:
>> Is there another way to get the service catalog besides
>> MINISHIFT_ENABLE_EXPERIMENT=on ?
> AFAIR, there is no other way to get the service catalog feature than this.

This all comes down to the following message:
https://github.com/openshift/origin/blob/c21a4709fd612345e6a732874a3ccd2cbaf1eaa6/docs/cluster_up_down.md#installing-the-service-catalog

We had seen several issues when deploying OpenShift with the Service Catalog, so we had to decide whether to include it or not. We eventually decided to include it, but had to find a way to easily indicate that things might break or not work as expected. Hopefully, over time, the flag can be promoted to general use by removing this restriction. Feedback and keeping us up-to-date can surely help with this...

Did the service catalog deploy and work as expected?

Gerard

From: jstracha at redhat.com (James Strachan)
Date: Mon, 6 Nov 2017 00:54:45 -0800
Subject: [Devtools] Fabric8 maven plug-in

Any clues as to what's not working? It's pretty hard to diagnose "not working".

Does the command listen on the debug port?
Does the pod have the JAVA_ENABLE_DEBUG env var enabled? I've seen DeploymentConfigs be changed and the change never actually do anything on OpenShift - I wonder if you just need to manually click "Deploy" on the DeploymentConfig?

On Sun, Nov 5, 2017 at 8:45 AM, Burr Sutter wrote:
> I believe the Fabric8:debug is broken.

--
James
-------
Red Hat
Twitter: @jstrachan
Email: james.strachan at gmail.com
Blog: https://medium.com/@jstrachan/
fabric8: https://fabric8.io/ open source development platform

From: bsutter at redhat.com (Burr Sutter)
Date: Mon, 06 Nov 2017 09:49:23 +0000
Subject: [Devtools] Fabric8 maven plug-in

Currently running through airports. Will check again tonight.

Could it be "memory"? My 6G minishift VM is running the 3.7 rc and lots of JVMs, at least 10.

free -m says that there is some memory available, but I have noticed that deployments hang in general until I delete some previous projects.

I think fabric8:debug is starting a new pod, which is taking a long time.

On Mon, Nov 6, 2017 at 10:54 AM James Strachan wrote:
> Does the command listen on the debug port? Does the pod have the
> JAVA_ENABLE_DEBUG env var enabled?
From: pradeepto at redhat.com (Pradeepto Bhattacharya)
Date: Mon, 6 Nov 2017 18:22:29 +0530
Subject: [Devtools] Fabric8 maven plug-in

+ Hrishikesh
From: bsutter at redhat.com (Burr Sutter)
Date: Mon, 6 Nov 2017 21:20:21 +0200
Subject: [Devtools] Fabric8 maven plug-in

Made it to the hotel room with horrible internet connectivity, but trying to make this work.

Even fabric8:deploy now presents the end-user with lots of these messages:

  [INFO] F8: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container \"sti-build\" in pod \"microspringboot3-s2i-1-build\" is waiting to start: PodInitializing","reason":"BadRequest","code":400}
  [INFO] Current reconnect backoff is 1000 milliseconds (T0)
  [INFO] Current reconnect backoff is 2000 milliseconds (T1)
  [INFO] Current reconnect backoff is 4000 milliseconds (T2)
  [INFO] Current reconnect backoff is 8000 milliseconds (T3)
  [INFO] Current reconnect backoff is 16000 milliseconds (T4)
  [INFO] Current reconnect backoff is 32000 milliseconds (T5)

(the T5 line then repeats many times)
From: bsutter at redhat.com (Burr Sutter)
Date: Mon, 6 Nov 2017 22:25:32 +0200
Subject: [Devtools] Fabric8 maven plug-in

And this is the project that I have tried to debug:
https://github.com/redhat-developer-demos/microspringboot1/blob/master/pom.xml#L80

If you wait a few minutes, it sometimes works.

So, I did demo this live (it might have been a different Java project, I can't remember) at JavaOne, but skipped it for JavaDay as it was too unreliable.

Still unreliable. Perhaps something is mis-configured in that project and I need to find another one.
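As an aside, the "Current reconnect backoff" lines quoted in this thread follow a doubling schedule capped at 32000 milliseconds (1000ms at T0, 16000ms at T4, then pinned at 32000ms from T5 on). A small sketch of that schedule, inferred from the log output rather than from the fabric8-maven-plugin source:

```shell
# Compute the reconnect backoff (in ms) for retry number Tn, matching
# the quoted [INFO] lines: double on each retry, cap at 32 seconds.
# Base and cap values are read off the log, not the plugin source.
backoff_ms() {
  n=$1 delay=1000 cap=32000
  while [ "$n" -gt 0 ]; do
    delay=$((delay * 2))
    if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
    n=$((n - 1))
  done
  echo "$delay"
}

backoff_ms 0   # prints 1000
backoff_ms 4   # prints 16000
backoff_ms 9   # prints 32000 (capped)
```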
>>> >>> On Mon, Nov 6, 2017 at 10:54 AM James Strachan >>> wrote: >>> >>>> any clues as to what's not working? Its pretty hard to diagnose "not >>>> working" >>>> >>>> Does the command listen on the debug port? Does the pod have the >>>> JAVA_ENABLE_DEBUG env var enabled? I've seen DeploymentConfig's be >>>> changed and that change never actually do anything on OpenShift - I wonder >>>> if you just need to manually click "Deploy" on the DeploymentConfig? >>>> >>>> On Sun, Nov 5, 2017 at 8:45 AM, Burr Sutter wrote: >>>> >>>>> I believe the Fabric8:debug is broken. >>>>> >>>>> It could be that my cluster is under too much strain but the debug has >>>>> been taking several minutes and I had to just give up. >>>>> >>>>> Has anyone else had recent success with fabric8:debug? >>>>> >>>>> _______________________________________________ >>>>> Devtools mailing list >>>>> Devtools at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/devtools >>>>> >>>>> >>>> >>>> >>>> -- >>>> James >>>> ------- >>>> Red Hat >>>> >>>> Twitter: @jstrachan >>>> Email: james.strachan at gmail.com >>>> Blog: https://medium.com/@jstrachan/ >>>> >>>> fabric8: https://fabric8.io/ >>>> open source development platform >>>> >>> >>> _______________________________________________ >>> Devtools mailing list >>> Devtools at redhat.com >>> https://www.redhat.com/mailman/listinfo/devtools >>> >>> >> >> >> -- >> Pradeepto Bhattacharya >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hshinde at redhat.com Tue Nov 7 10:10:45 2017 From: hshinde at redhat.com (Hrishikesh Shinde) Date: Tue, 7 Nov 2017 15:40:45 +0530 Subject: [Devtools] Fabric8 maven plug-in In-Reply-To: References: Message-ID: Hi Burr, Thanks for reporting the issue. As per the observations, f8-m-p deployment and debugging functionalities (may be other things) are not stable with OpenShift v3.7.0-rc.0 while most of the things working well with OpenShift 3.6.x. 
Might be things are breaking mostly due to API definition changes in Kubernetes and OpenShift territory. Work for supporting f8-m-p for 1.8 and OpenShift 3.7 is in progress.[1] [1] https://github.com/fabric8io/fabric8-maven-plugin/pull/1067 On Tue, Nov 7, 2017 at 1:55 AM, Burr Sutter wrote: > and this is the project that I have tried to debug > https://github.com/redhat-developer-demos/microspringboot1/ > blob/master/pom.xml#L80 > > if you wait a few minutes, it sometimes works. > > so, i did demo this live (might have been a different java project, can't > remember) at JavaOne > but skipped it for JavaDay as it was too unreliable. > > still unreliable. > > perhaps something mis-configured in that project and I need to find > another one. > > On Mon, Nov 6, 2017 at 9:20 PM, Burr Sutter wrote: > >> made it to hotel room and horrible internet connectivity but trying to >> make this work >> >> even fabric8:deploy now presents the end-user will lots of these messages >> >> [INFO] F8: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container >> \"sti-build\" in pod \"microspringboot3-s2i-1-build\" is waiting to >> start: PodInitializing","reason":"BadRequest","code":400} >> >> [INFO] Current reconnect backoff is 1000 milliseconds (T0) >> >> [INFO] Current reconnect backoff is 2000 milliseconds (T1) >> >> [INFO] Current reconnect backoff is 4000 milliseconds (T2) >> >> [INFO] Current reconnect backoff is 8000 milliseconds (T3) >> >> [INFO] Current reconnect backoff is 16000 milliseconds (T4) >> >> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >> >> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >> >> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >> >> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >> >> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >> >> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >> >> [INFO] Current reconnect backoff is 32000 milliseconds 
(T5) >> >> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >> >> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >> >> On Mon, Nov 6, 2017 at 2:52 PM, Pradeepto Bhattacharya < >> pradeepto at redhat.com> wrote: >> >>> + Hrishikesh >>> >>> On Mon, Nov 6, 2017 at 3:19 PM, Burr Sutter wrote: >>> >>>> Currently running through airports. Will check again tonight. >>>> >>>> Could it be ?memory?? My 6G minishift VM is running the 3.7rc and lots >>>> of JVMs, at least 10. >>>> >>>> free -m says that there is some memory available but I have notice that >>>> deployments hang in general until I delete some previous projects. >>>> >>>> I think fabric:debug is starting a new pod which taking a long time. >>>> >>>> On Mon, Nov 6, 2017 at 10:54 AM James Strachan >>>> wrote: >>>> >>>>> any clues as to what's not working? Its pretty hard to diagnose "not >>>>> working" >>>>> >>>>> Does the command listen on the debug port? Does the pod have the >>>>> JAVA_ENABLE_DEBUG env var enabled? I've seen DeploymentConfig's be >>>>> changed and that change never actually do anything on OpenShift - I wonder >>>>> if you just need to manually click "Deploy" on the DeploymentConfig? >>>>> >>>>> On Sun, Nov 5, 2017 at 8:45 AM, Burr Sutter >>>>> wrote: >>>>> >>>>>> I believe the Fabric8:debug is broken. >>>>>> >>>>>> It could be that my cluster is under too much strain but the debug >>>>>> has been taking several minutes and I had to just give up. >>>>>> >>>>>> Has anyone else had recent success with fabric8:debug? 
>>>>>> >>>>>> _______________________________________________ >>>>>> Devtools mailing list >>>>>> Devtools at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/devtools >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> James >>>>> ------- >>>>> Red Hat >>>>> >>>>> Twitter: @jstrachan >>>>> Email: james.strachan at gmail.com >>>>> Blog: https://medium.com/@jstrachan/ >>>>> >>>>> fabric8: https://fabric8.io/ >>>>> open source development platform >>>>> >>>> >>>> _______________________________________________ >>>> Devtools mailing list >>>> Devtools at redhat.com >>>> https://www.redhat.com/mailman/listinfo/devtools >>>> >>>> >>> >>> >>> -- >>> Pradeepto Bhattacharya >>> >>> >> > -- Hrishikesh | +91 7276 342274 | IRC: hshinde -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsutter at redhat.com Fri Nov 10 14:23:38 2017 From: bsutter at redhat.com (Burr Sutter) Date: Fri, 10 Nov 2017 14:23:38 +0000 Subject: [Devtools] Fabric8 maven plug-in In-Reply-To: References: Message-ID: Thank you On Tue, Nov 7, 2017 at 12:11 PM Hrishikesh Shinde wrote: > Hi Burr, > Thanks for reporting the issue. > As per the observations, f8-m-p deployment and debugging functionalities > (may be other things) are not stable with OpenShift v3.7.0-rc.0 while most > of the things working well with OpenShift 3.6.x. > Might be things are breaking mostly due to API definition changes in > Kubernetes and OpenShift territory. > > Work for supporting f8-m-p for 1.8 and OpenShift 3.7 is in progress.[1] > > [1] https://github.com/fabric8io/fabric8-maven-plugin/pull/1067 > > > > > On Tue, Nov 7, 2017 at 1:55 AM, Burr Sutter wrote: > >> and this is the project that I have tried to debug >> >> https://github.com/redhat-developer-demos/microspringboot1/blob/master/pom.xml#L80 >> >> if you wait a few minutes, it sometimes works. 
>> >> so, i did demo this live (might have been a different java project, can't >> remember) at JavaOne >> but skipped it for JavaDay as it was too unreliable. >> >> still unreliable. >> >> perhaps something mis-configured in that project and I need to find >> another one. >> >> On Mon, Nov 6, 2017 at 9:20 PM, Burr Sutter wrote: >> >>> made it to hotel room and horrible internet connectivity but trying to >>> make this work >>> >>> even fabric8:deploy now presents the end-user will lots of these messages >>> >>> [INFO] F8: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container >>> \"sti-build\" in pod \"microspringboot3-s2i-1-build\" is waiting to start: >>> PodInitializing","reason":"BadRequest","code":400} >>> >>> [INFO] Current reconnect backoff is 1000 milliseconds (T0) >>> >>> [INFO] Current reconnect backoff is 2000 milliseconds (T1) >>> >>> [INFO] Current reconnect backoff is 4000 milliseconds (T2) >>> >>> [INFO] Current reconnect backoff is 8000 milliseconds (T3) >>> >>> [INFO] Current reconnect backoff is 16000 milliseconds (T4) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> [INFO] Current reconnect backoff is 32000 milliseconds (T5) >>> >>> On Mon, Nov 6, 2017 at 2:52 PM, Pradeepto Bhattacharya < >>> pradeepto at redhat.com> wrote: >>> >>>> + Hrishikesh >>>> >>>> On Mon, Nov 6, 2017 at 3:19 PM, Burr Sutter wrote: >>>> >>>>> Currently running through airports. Will check again tonight. >>>>> >>>>> Could it be ?memory?? 
My 6G minishift VM is running the 3.7rc and lots >>>>> of JVMs, at least 10. >>>>> >>>>> free -m says that there is some memory available but I have notice >>>>> that deployments hang in general until I delete some previous projects. >>>>> >>>>> I think fabric:debug is starting a new pod which taking a long time. >>>>> >>>>> On Mon, Nov 6, 2017 at 10:54 AM James Strachan >>>>> wrote: >>>>> >>>>>> any clues as to what's not working? Its pretty hard to diagnose "not >>>>>> working" >>>>>> >>>>>> Does the command listen on the debug port? Does the pod have the >>>>>> JAVA_ENABLE_DEBUG env var enabled? I've seen DeploymentConfig's be >>>>>> changed and that change never actually do anything on OpenShift - I wonder >>>>>> if you just need to manually click "Deploy" on the DeploymentConfig? >>>>>> >>>>>> On Sun, Nov 5, 2017 at 8:45 AM, Burr Sutter >>>>>> wrote: >>>>>> >>>>>>> I believe the Fabric8:debug is broken. >>>>>>> >>>>>>> It could be that my cluster is under too much strain but the debug >>>>>>> has been taking several minutes and I had to just give up. >>>>>>> >>>>>>> Has anyone else had recent success with fabric8:debug? >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Devtools mailing list >>>>>>> Devtools at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/devtools >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> James >>>>>> ------- >>>>>> Red Hat >>>>>> >>>>>> Twitter: @jstrachan >>>>>> Email: james.strachan at gmail.com >>>>>> Blog: https://medium.com/@jstrachan/ >>>>>> >>>>>> fabric8: https://fabric8.io/ >>>>>> open source development platform >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Devtools mailing list >>>>> Devtools at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/devtools >>>>> >>>>> >>>> >>>> >>>> -- >>>> Pradeepto Bhattacharya >>>> >>>> >>> >> > > > -- > Hrishikesh | +91 7276 342274 | IRC: hshinde > -------------- next part -------------- An HTML attachment was scrubbed... 
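James's questions in the thread above (is the debug port listening, is JAVA_ENABLE_DEBUG set on the pod) can be turned into a quick checklist. A minimal sketch follows; the DeploymentConfig name "myapp" and port 5005 (the conventional Java remote-debug port) are assumptions, not values taken from this thread, and the script only prints guidance when no `oc` binary or cluster is available.

```shell
# Sketch of the checks suggested above: is JAVA_ENABLE_DEBUG set on the
# DeploymentConfig, and is anything listening on the JDWP port?
# "myapp" and 5005 are hypothetical placeholders.
check_debug() {
  name=$1
  if ! command -v oc >/dev/null 2>&1; then
    echo "oc not found; run these checks against a live cluster"
    return 0
  fi
  # Does the DeploymentConfig carry the debug env var?
  oc set env dc/"$name" --list 2>/dev/null | grep JAVA_ENABLE_DEBUG \
    || echo "JAVA_ENABLE_DEBUG not set on dc/$name"
  # To probe the debug port itself, forward it from the running pod:
  #   oc port-forward <pod-name> 5005:5005 &
  #   nc -z localhost 5005 && echo "debug port is listening"
}
check_debug myapp
```

If the env var is missing, redeploying with `mvn fabric8:debug` (or manually rolling out the DeploymentConfig, as James suggests) is the next thing to try.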
URL: From bsutter at redhat.com Wed Nov 15 00:06:30 2017 From: bsutter at redhat.com (Burr Sutter) Date: Tue, 14 Nov 2017 19:06:30 -0500 Subject: [Devtools] minishift profiles In-Reply-To: References: Message-ID: One gotcha with using "profile set" as the first statement is that the user gets an off-putting message ./minishift profile set istio-demo oc cli context could not changed for 'istio-demo'. Make sure the profile is in running state or restart if the problem persists. The problem with this message is that I have several more commands to run before I run "start", so stop complaining to me that I have not yet run start. On Sun, Nov 5, 2017 at 7:26 PM, Gerard Braad wrote: > Hi, > > > On Sun, Nov 5, 2017 at 4:39 PM, Burr Sutter wrote: > > but one syntax requires no "--" and the other has "--" which I find to be > > odd since I am using the commands back to back. > > One is a command to set the active profile for subsequent commands, > while the other is a flag to perform a single action/command outside > of the currently chosen or active profile. > > On Sun, Nov 5, 2017 at 11:35 PM, Burr Sutter wrote: > > I was able to switch profiles live today. > > Good to hear! There is actually not much to it, we do still have some > kinks to iron out. > Feedback like yours will help with this, as some fixes can be as easy as > better feedback during start, documentation, etc. > > On Mon, Nov 6, 2017 at 6:33 AM, Burr Sutter wrote: > > But for subsequent restarts, running that same sequence yields lots of > > messages that look bad to the end-user. > > At the moment we provide verbose feedback about the startup of > Minishift, as we see many reports related to missing settings, > to being on a different distro, or just sometimes > weird behaviour... > > We will try to reduce the output after we are a little bit more > stable, and only report things when we really see a failure.
This is > pretty much the same as what happened for the output of `oc cluster > up`, as they also dropped the verbosity after some time. > > > The big win on profiles is the ability to start them and stop them > often, on the fly, in front of a live audience. > > As long as the IP address doesn't change... as we still have an issue > with how certificates are generated by `oc cluster up`. As long as > there is no way to regenerate the certificates for the new IP address > assigned to the VM, the startup would fail. We can work around this by > forcibly assigning an address on restart, but this will only work when > using the same address for which the certificate was generated. > > I hope this can be solved... > > > Gerard > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsutter at redhat.com Wed Nov 15 14:45:14 2017 From: bsutter at redhat.com (Burr Sutter) Date: Wed, 15 Nov 2017 09:45:14 -0500 Subject: [Devtools] Fwd: [openshift-sme] Blog - minishift setup on Mac OS In-Reply-To: References: Message-ID: good feedback on our docs ---------- Forwarded message ---------- From: Ritesh Shah Date: Wed, Nov 15, 2017 at 2:59 AM Subject: [openshift-sme] Blog - minishift setup on Mac OS To: openshift-sme at redhat.com Hi, Please find my blog on minishift setup on Mac OS. I had some permissions issues while setting up minishift on my MacBook and hence thought of documenting the same. https://rshah16.blogspot.com/2017/11/deploy-openshift-on-macos.html Cheers! Ritesh Have a question? First, check the FAQ: https://pnt.redhat.com/pnt/p-734673/openshift-con...-Jun-2017.pdf Next, check the archives: http://post-office.corp.redhat.com/archives/openshift-sme/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lmohanty at redhat.com Wed Nov 15 19:01:58 2017 From: lmohanty at redhat.com (Lalatendu Mohanty) Date: Thu, 16 Nov 2017 00:31:58 +0530 Subject: [Devtools] Fwd: [openshift-sme] Blog - minishift setup on Mac OS In-Reply-To: References: Message-ID: On Wed, Nov 15, 2017 at 8:15 PM, Burr Sutter wrote: > good feedback on our docs > > Hi Burr, The blog actually contains wrong information. I have already pointed that to the author. -Lala > ---------- Forwarded message ---------- > From: Ritesh Shah > Date: Wed, Nov 15, 2017 at 2:59 AM > Subject: [openshift-sme] Blog - minishift setup on Mac OS > To: openshift-sme at redhat.com > > > Hi, > > Pls. Find my blog on mini shift Setup on Mac OS. I had some permissions > issues while setting up mini shift on my Mac book and hence thought of > documenting the same. > > https://rshah16.blogspot.com/2017/11/deploy-openshift-on-macos.html > > Cheers! > Ritesh > > Have a question? > First, check the FAQ: https://pnt.redhat.com/pnt/p-7 > 34673/openshift-con...-Jun-2017.pdf > Next, check the archives: http://post-office.corp.redhat > .com/archives/openshift-sme/ > > > _______________________________________________ > Devtools mailing list > Devtools at redhat.com > https://www.redhat.com/mailman/listinfo/devtools > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsutter at redhat.com Wed Nov 15 21:24:35 2017 From: bsutter at redhat.com (Burr Sutter) Date: Wed, 15 Nov 2017 16:24:35 -0500 Subject: [Devtools] minishift addon and profile In-Reply-To: References: Message-ID: On Sun, Nov 5, 2017 at 7:37 PM, Gerard Braad wrote: > > > On Sun, Nov 5, 2017 at 9:21 PM, Budh Ram Gurung > wrote: > > On Sat, Nov 4, 2017 at 2:54 PM, Burr Sutter wrote: > >> Is there another way to get the service catalog besides > >> MINISHIFT_ENABLE_EXPERIMENT=on > >> ? > > AFAIR, there is no other way to get service catalog feature than this. 
> > We don't even expose the flag if the experimental env var is not set: > > > > $ minishift start -h | grep service-catalog > > > > $ MINISHIFT_ENABLE_EXPERIMENTAL=on minishift start -h | grep > service-catalog > > --service-catalog Install service catalog > > (experimental) > > This all comes down to the following message: > https://github.com/openshift/origin/blob/c21a4709fd612345e6a732874a3ccd2cbaf1eaa6/docs/cluster_up_down.md#installing-the-service-catalog > > > We had seen several issues when deploying OpenShift with Service Catalog. > So we had to decide whether to include it or not. We eventually decided to include > it, but had to find a way to easily indicate that things might break or not > work as expected. Hopefully, over time, the flag can be promoted to general > use by removing this restriction. > Feedback and keeping us up-to-date can surely help with this... > > Did service catalog deploy and work as expected? > As of today (Nov 15) here is what I am using: ./minishift profile set istio-demo ./minishift config set memory 8GB ./minishift config set cpus 2 ./minishift config set vm-driver virtualbox ./minishift addon enable admin-user ./minishift config set openshift-version v3.7.0-rc.0 ./minishift start and that seems to give me the service catalog, and my initial test (launching a MySQL database) worked great. I am now focused on getting more Istio demos working on that same setup. https://docs.google.com/document/d/1FDd80Ye6mXcPjq9DiwKjVlNyzuYHUYbOUVYKVbnXp7s/edit?usp=sharing > > Gerard > -------------- next part -------------- An HTML attachment was scrubbed...
Name: tech-preview.png Type: image/png Size: 7933 bytes Desc: not available URL: From bsutter at redhat.com Thu Nov 16 23:56:46 2017 From: bsutter at redhat.com (Burr Sutter) Date: Thu, 16 Nov 2017 18:56:46 -0500 Subject: [Devtools] minishift 20 pod limit Message-ID: I finally figured out why I had pods that would not deploy, there is a 20 pod max on minishift. Has anyone seen a way to tweak that limit? And can you update it on a live system? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsutter at redhat.com Fri Nov 17 14:37:21 2017 From: bsutter at redhat.com (Burr Sutter) Date: Fri, 17 Nov 2017 09:37:21 -0500 Subject: [Devtools] [openshift-sme] minishift 20 pod limit In-Reply-To: References: Message-ID: On Thu, Nov 16, 2017 at 10:06 PM, Hugo Guerrero wrote: > Hi > > Burr, the default size of the minishift VM is just 1 core and 4GB ram, you > will run out of pods easily as it will support max 20 pods. You can > increase the cores associated with the VM and the memory to allow more pods > to be deployed. I just added 4 cores instead of 1 and was able to deploy > all my pods. > There is still a fixed 20 pod limit for the Node as demonstrated by Graham's url to the docs. oc describe node allows you to see all the pods and the fact that I am at the limit. I am just not sure how to change the variable. > > > *Hugo* > > On Thu, Nov 16, 2017 at 6:56 PM, Burr Sutter wrote: > >> I finally figured out why I had pods that would not deploy, there is a 20 >> pod max on minishift. >> >> Has anyone seen a way to tweak that limit? >> >> And can you update it on a live system? >> >> >> >> >> Have a question? >> First, check the FAQ: https://pnt.redhat.com/pnt/p-7 >> 34673/openshift-con...-Jun-2017.pdf >> Next, check the archives: http://post-office.corp.redhat >> .com/archives/openshift-sme/ >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
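The knob Burr is looking for is the kubelet's pods-per-core setting, and the rest of the thread pins the file down to /var/lib/minishift/openshift.local.config/node-localhost/node-config.yaml. A runnable sketch of that edit against a scratch copy of the file follows; the file body and the max-pods value "40" are made-up stand-ins (a real node-config.yaml has many more keys), while "30" is the pods-per-core target the thread converges on.

```shell
# Rehearse the node-config.yaml edit on a scratch copy before touching the VM.
# The file below is a minimal stand-in for the real node config; max-pods "40"
# is invented for the demo.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
kubeletArguments:
  max-pods:
  - "40"
EOF
# Add a pods-per-core entry under kubeletArguments, then its value "30",
# mirroring the sed approach that appears later in the thread.
sed -i '/kubeletArguments/ a \  pods-per-core:' "$cfg"
sed -i '/pods-per-core/ a \  - "30"' "$cfg"
cat "$cfg"
# On a live minishift VM the same edit would be followed by restarting the
# origin container, e.g.:
#   minishift ssh "sudo docker restart origin"
```

The effective pod limit is the lower of max-pods and pods-per-core times the VM's core count, so raising pods-per-core (or adding cores) is what actually moves the 20-pod ceiling.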
URL: From bsutter at redhat.com Fri Nov 17 16:33:28 2017 From: bsutter at redhat.com (Burr Sutter) Date: Fri, 17 Nov 2017 11:33:28 -0500 Subject: [Devtools] [openshift-sme] minishift 20 pod limit In-Reply-To: References: Message-ID: On Fri, Nov 17, 2017 at 10:25 AM, Hugo Guerrero wrote: > Well, so minishift ships with the default values for both documented > kubeletArguments. The important one is pods-per-core, as "10" is the > default value. So if you start your minishift with just 2 cores, that's the > max amount of pods you will get, as "the lower of the two limits the > number of pods on a node". No matter how many max-pods you put, you are still > limited by pods-per-core. So if you add more cores you will get more > pods available, up to "max-pods". Or just change the pods-per-core and still > use the same amount of cpus. > > The editing place is /var/lib/minishift/openshift.local.config/node-localhost/node-config.yaml > unless you modified it in the start command. > I see that file on my minishift but which line do I edit? I do not see a pods-per-core entry and once edited, do I need to "restart" something to get the setting to take effect. > > *Hugo* > > On Fri, Nov 17, 2017 at 9:37 AM, Burr Sutter wrote: > >> >> >> On Thu, Nov 16, 2017 at 10:06 PM, Hugo Guerrero >> wrote: >> >>> Hi >>> >>> Burr, the default size of the minishift VM is just 1 core and 4GB ram, >>> you will run out of pods easily as it will support max 20 pods. You can >>> increase the cores associated with the VM and the memory to allow more pods >>> to be deployed. I just added 4 cores instead of 1 and was able to deploy >>> all my pods. >>> >> >> There is still a fixed 20 pod limit for the Node as demonstrated by >> Graham's URL to the docs. >> >> oc describe node allows you to see all the pods and the fact that I am at >> the limit. >> >> I am just not sure how to change the variable.
>> >> >> >>> >>> >>> *Hugo* >>> >>> On Thu, Nov 16, 2017 at 6:56 PM, Burr Sutter wrote: >>> >>>> I finally figured out why I had pods that would not deploy, there is a >>>> 20 pod max on minishift. >>>> >>>> Has anyone seen a way to tweak that limit? >>>> >>>> And can you update it on a live system? >>>> >>>> >>>> >>>> >>>> Have a question? >>>> First, check the FAQ: https://pnt.redhat.com/pnt/p-7 >>>> 34673/openshift-con...-Jun-2017.pdf >>>> Next, check the archives: http://post-office.corp.redhat >>>> .com/archives/openshift-sme/ >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsutter at redhat.com Mon Nov 20 03:06:42 2017 From: bsutter at redhat.com (Burr Sutter) Date: Sun, 19 Nov 2017 22:06:42 -0500 Subject: [Devtools] [openshift-sme] minishift 20 pod limit In-Reply-To: References: Message-ID: It is easier to just add a CPU to get 30 pods. With the Istio components taking 9 pods, it is pretty easy to hit the 20 limit :-) On Fri, Nov 17, 2017 at 11:46 PM, Andrew Block wrote: > Burr, > > Heres a one liner that will modify the pods per core value (30 in this use > case) in minishift > > minishift ssh "sudo sed -i '/kubeletArguments/ a \ pods-per-core:' > /var/lib/minishift/openshift.local.config/node-localhost/node-config.yaml > && sudo sed -i '/pods-per-core/ a \ - \"30\"' > /var/lib/minishift/openshift.local.config/node-localhost/node-config.yaml > && sudo docker restart origin" > > > > On Fri, Nov 17, 2017 at 11:11 AM, Hugo Guerrero > wrote: > >> I believe options are not in the file by defaul, but look for >> "kubeArguments" section and add them there. >> >> >> *Hugo* >> >> On Fri, Nov 17, 2017 at 11:33 AM, Burr Sutter wrote: >> >>> >>> >>> On Fri, Nov 17, 2017 at 10:25 AM, Hugo Guerrero >>> wrote: >>> >>>> Well, so minishift ships with the default values for both documented >>>> kubeArguments. The important one is the pods-per-core as "10" is the >>>> default value. 
So if you start your minishift with just 2 cores that's the >>>> max ammount of pods you will get as the "the lower of the two limits >>>> the number of pods on a node". No matter how many max-pods you put, you >>>> still we limited by pods-per-core. So if you add more cores you will get >>>> more pods available up to "max-pods". Or just change the pods-per-core and >>>> still use the same amount of cpus. >>>> >>>> The editing place is on /var/lib/minishift/openshif >>>> t.local.config/node-localhost/node-config.yaml unless you modified it >>>> in the start command. >>>> >>> >>> I see that file on my minishift but line do I edit? >>> I do not see a pods-per-core entry >>> >>> and once edited, do I need to "restart" something to get the setting to >>> take effect. >>> >>> >>>> >>>> *Hugo* >>>> >>>> On Fri, Nov 17, 2017 at 9:37 AM, Burr Sutter >>>> wrote: >>>> >>>>> >>>>> >>>>> On Thu, Nov 16, 2017 at 10:06 PM, Hugo Guerrero >>>>> wrote: >>>>> >>>>>> Hi >>>>>> >>>>>> Burr, the default size of the minishift VM is just 1 core and 4GB >>>>>> ram, you will run out of pods easily as it will support max 20 pods. You >>>>>> can increase the cores associated with the VM and the memory to allow more >>>>>> pods to be deployed. I just added 4 cores instead of 1 and was able to >>>>>> deploy all my pods. >>>>>> >>>>> >>>>> There is still a fixed 20 pod limit for the Node as demonstrated by >>>>> Graham's url to the docs. >>>>> >>>>> oc describe node allows you to see all the pods and the fact that I am >>>>> at the limit. >>>>> >>>>> I am just not sure how to change the variable. >>>>> >>>>> >>>>> >>>>>> >>>>>> >>>>>> *Hugo* >>>>>> >>>>>> On Thu, Nov 16, 2017 at 6:56 PM, Burr Sutter >>>>>> wrote: >>>>>> >>>>>>> I finally figured out why I had pods that would not deploy, there is >>>>>>> a 20 pod max on minishift. >>>>>>> >>>>>>> Has anyone seen a way to tweak that limit? >>>>>>> >>>>>>> And can you update it on a live system? 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Have a question? >>>>>>> First, check the FAQ: https://pnt.redhat.com/pnt/p-7 >>>>>>> 34673/openshift-con...-Jun-2017.pdf >>>>>>> Next, check the archives: http://post-office.corp.redhat >>>>>>> .com/archives/openshift-sme/ >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> >> Have a question? >> First, check the FAQ: https://pnt.redhat.com/pnt/p-7 >> 34673/openshift-con...-Jun-2017.pdf >> Next, check the archives: http://post-office.corp.redhat >> .com/archives/openshift-sme/ >> > > > > -- > Andrew Block > Principal Consultant | Red Hat Consulting > 101 N. Wacker, Suite 150 > > Chicago, IL 60606 > > andrew.block at redhat.com | m. (716) 870-2408 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdrage at redhat.com Tue Nov 21 15:05:28 2017 From: cdrage at redhat.com (Charlie Drage) Date: Tue, 21 Nov 2017 10:05:28 -0500 Subject: [Devtools] OpenShift Online Down? (People are probably already aware of this) Message-ID: <20171121150526.GA14@f188861d143d> I saw this in reddit this morning: https://www.reddit.com/r/redhat/comments/7e94tn/openshift_online_down/ I'm assuming people are already aware of it? May be good from someone on the team to reply to the thread :) P.S. Ignore me if this has already been mentioned, lot's of emails flew by this week. -- Charlie Drage Software Engineer Red Hat (Canada) From bsutter at redhat.com Thu Nov 30 13:34:28 2017 From: bsutter at redhat.com (Burr Sutter) Date: Thu, 30 Nov 2017 08:34:28 -0500 Subject: [Devtools] Fwd: [openshift-sme] Maven plugin fabric8 - fabric8:watch on Windows - ERROR In-Reply-To: References: Message-ID: FYI The Maven Plugin is one of our most critical tool offerings as it is the primary solution for RHOAR and all things microservices. 
---------- Forwarded message ---------- From: Roland Huss Date: Thu, Nov 30, 2017 at 6:19 AM Subject: Re: [openshift-sme] Maven plugin fabric8 - fabric8:watch on Windows - ERROR To: Mattia Mascia Cc: openshift-sme Thanks Mattia for investigating this issue! Actually the canonical repo is https://github.com/fabric8io/fabric8-maven-plugin , it would be awesome if you open an issue there. I'm currently not directly connected to the fabric8-maven-plugin anymore, but there are good guys behind this project now. thanks ... ... roland On Thu, Nov 30, 2017 at 11:53 AM Mattia Mascia wrote: > Hi guys, > > I found the reason why this happens and I will open a pull request on > https://github.com/rhuss/fabric8-maven-plugin > > The issue is in > *io.fabric8.maven.generator.javaexec.FatJarDetector.java*, in the *scan* > method. > > It never closes the jar file once it has read it. > > @@ -59,8 +59,7 @@ > long maxSize = 0; > for (String jarOrWar : jarOrWars) { > File archiveFile = new File(directory, jarOrWar); > - try { > - JarFile archive = new JarFile(archiveFile); > + try (JarFile archive = new JarFile(archiveFile)){ > Manifest mf = archive.getManifest(); > Attributes mainAttributes = mf.getMainAttributes(); > if (mainAttributes != null) { > > > Best > > Mattia > > On Wed, Nov 29, 2017 at 10:37 AM, Mattia Mascia > wrote: > >> Hi SME, >> >> Has anyone experienced the following error using the fabric8 plugin on Windows? >> Mac and Linux work fine. >> >> It looks like a race condition on the target jar; the mvn process is >> the only one trying to access it, and no other external processes are touching the >> jar. >> >> [ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:3.5.33:build >> (default) on project app-sample: Execution default of goal >> io.fabric8:fabric8-maven-plugin:3.5.33:build failed: Cannot extract >> generator config: org.apache.maven.plugin.MojoExecutionException: Failed >> to add devtools files to fat jar C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar.
>> java.io.IOException: Failed to delete original file 'C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar' after copy to 'C:\TEMP\p251228\app-sample-1.0-SNAPSHOT.jar6705264538840591172.tmp' -> [Help 1]
>>
>> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.fabric8:fabric8-maven-plugin:3.5.33:build (default) on project app-sample: Execution default of goal io.fabric8:fabric8-maven-plugin:3.5.33:build failed: Cannot extract generator config: org.apache.maven.plugin.MojoExecutionException: Failed to add devtools files to fat jar C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar. java.io.IOException: Failed to delete original file 'C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar' after copy to 'C:\TEMP\p251228\app-sample-1.0-SNAPSHOT.jar6705264538840591172.tmp'
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:224)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.executeForkedExecutions(MojoExecutor.java:364)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:198)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.executeForkedExecutions(MojoExecutor.java:364)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:198)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:108)
>>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:76)
>>     at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>>     at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:116)
>>     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:361)
>>     at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
>>     at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
>>     at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
>>     at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:498)
>>     at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>>     at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
>>     at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
>>     at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
>> Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default of goal io.fabric8:fabric8-maven-plugin:3.5.33:build failed: Cannot extract generator config: org.apache.maven.plugin.MojoExecutionException: Failed to add devtools files to fat jar C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar. java.io.IOException: Failed to delete original file 'C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar' after copy to 'C:\TEMP\p251228\app-sample-1.0-SNAPSHOT.jar6705264538840591172.tmp'
>>     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:144)
>>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
>>     ... 27 more
>> Caused by: java.lang.IllegalArgumentException: Cannot extract generator config: org.apache.maven.plugin.MojoExecutionException: Failed to add devtools files to fat jar C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar. java.io.IOException: Failed to delete original file 'C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar' after copy to 'C:\TEMP\p251228\app-sample-1.0-SNAPSHOT.jar6705264538840591172.tmp'
>>     at io.fabric8.maven.plugin.mojo.build.BuildMojo.customizeConfig(BuildMojo.java:297)
>>     at io.fabric8.maven.docker.config.ConfigHelper.resolveImages(ConfigHelper.java:51)
>>     at io.fabric8.maven.docker.AbstractDockerMojo.initImageConfiguration(AbstractDockerMojo.java:308)
>>     at io.fabric8.maven.docker.AbstractDockerMojo.execute(AbstractDockerMojo.java:215)
>>     at io.fabric8.maven.plugin.mojo.build.BuildMojo.execute(BuildMojo.java:193)
>>     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:133)
>>     ... 28 more
>> Caused by: org.apache.maven.plugin.MojoExecutionException: Failed to add devtools files to fat jar C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar. java.io.IOException: Failed to delete original file 'C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar' after copy to 'C:\TEMP\p251228\app-sample-1.0-SNAPSHOT.jar6705264538840591172.tmp'
>>     at io.fabric8.maven.generator.springboot.SpringBootGenerator.addDevToolsFilesToFatJar(SpringBootGenerator.java:151)
>>     at io.fabric8.maven.generator.springboot.SpringBootGenerator.customize(SpringBootGenerator.java:86)
>>     at io.fabric8.maven.plugin.generator.GeneratorManager.generate(GeneratorManager.java:62)
>>     at io.fabric8.maven.plugin.mojo.build.BuildMojo.customizeConfig(BuildMojo.java:295)
>>     ... 33 more
>> Caused by: java.io.IOException: Failed to delete original file 'C:\tmp\jenkins-cicd\target\app-sample-1.0-SNAPSHOT.jar' after copy to 'C:\TEMP\p251228\app-sample-1.0-SNAPSHOT.jar6705264538840591172.tmp'
>>     at org.apache.commons.io.FileUtils.moveFile(FileUtils.java:2578)
>>     at io.fabric8.maven.generator.springboot.SpringBootGenerator.copyFilesToFatJar(SpringBootGenerator.java:169)
>>     at io.fabric8.maven.generator.springboot.SpringBootGenerator.addDevToolsFilesToFatJar(SpringBootGenerator.java:149)
>>
>> A few questions:
>>
>> - Is https://github.com/rhuss/fabric8-maven-plugin the right place to open the issue?
>> - Who supports the plugin, and do we have a dedicated team here?
>>
>> Thanks a lot
>>
>> Mattia
>>
>> --
>> MATTIA MASCIA
>> SENIOR CONSULTANT
>> Red Hat Switzerland
>> mmascia at redhat.com M: +41 79 41 14 377 <+41794114377>
>
> --
> MATTIA MASCIA
> SENIOR CONSULTANT
> Red Hat Switzerland
> mmascia at redhat.com M: +41 79 41 14 377 <+41794114377>
>
> Have a question?
> First, check the FAQ: https://pnt.redhat.com/pnt/p-734673/openshift-con...-Jun-2017.pdf
> Next, check the archives: http://post-office.corp.redhat.com/archives/openshift-sme/

Have a question?
First, check the FAQ: https://pnt.redhat.com/pnt/p-734673/openshift-con...-Jun-2017.pdf
Next, check the archives: http://post-office.corp.redhat.com/archives/openshift-sme/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hshinde at redhat.com  Thu Nov 30 14:00:44 2017
From: hshinde at redhat.com (Hrishikesh Shinde)
Date: Thu, 30 Nov 2017 19:30:44 +0530
Subject: [Devtools] Fwd: [openshift-sme] Maven plugin fabric8 - fabric8:watch on Windows - ERROR
In-Reply-To:
References:
Message-ID:

Thanks, Burr, for sharing it. We have filed an issue for this, https://github.com/fabric8io/fabric8-maven-plugin/issues/1118, to keep it on our radar.

On Thu, Nov 30, 2017 at 7:04 PM, Burr Sutter wrote:

> FYI
>
> The Maven Plugin is one of our most critical tool offerings as it is the
> primary solution for RHOAR and all things microservices.
>
> ---------- Forwarded message ----------
> From: Roland Huss
> Date: Thu, Nov 30, 2017 at 6:19 AM
> Subject: Re: [openshift-sme] Maven plugin fabric8 - fabric8:watch on
> Windows - ERROR
> To: Mattia Mascia
> Cc: openshift-sme
>
> [...]
>
> _______________________________________________
> Devtools mailing list
> Devtools at redhat.com
> https://www.redhat.com/mailman/listinfo/devtools

--
Hrishikesh | +91 7276 342274 | IRC: hshinde
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
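Mattia's one-line fix quoted above can be reproduced in isolation. The sketch below is a minimal standalone reconstruction, not the plugin's actual FatJarDetector code: the class name, temp-file setup, and `demo.Main` value are illustrative. It builds a tiny jar, reads its manifest inside try-with-resources (as in the patched `scan` method), and only then deletes the file. Without the try-with-resources, the open `JarFile` handle is what makes the later delete/move fail on Windows.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class FatJarScanSketch {
    public static void main(String[] args) throws Exception {
        // Build a tiny jar with a Main-Class attribute to scan.
        File jar = File.createTempFile("app-sample", ".jar");
        Manifest mf = new Manifest();
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        mf.getMainAttributes().put(Attributes.Name.MAIN_CLASS, "demo.Main");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar), mf)) {
            // An empty jar body (manifest only) is enough for this sketch.
        }

        // try-with-resources closes the archive even if getManifest() throws.
        // The unpatched code never called close(), so on Windows the file
        // stayed locked and FileUtils.moveFile() could not delete the jar.
        String mainClass;
        try (JarFile archive = new JarFile(jar)) {
            mainClass = archive.getManifest().getMainAttributes()
                               .getValue(Attributes.Name.MAIN_CLASS);
        }

        // The handle is released, so the file can now be deleted or moved.
        boolean deleted = jar.delete();
        System.out.println(mainClass + " " + deleted);
    }
}
```

On Linux/macOS the delete succeeds either way (POSIX allows unlinking open files), which is why the bug only surfaced on Windows.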