
Avoiding errors when sprint is updated while issues are in queue

 

I have a Jira Cloud <> Jira Cloud connection. I am following the documentation very closely, and the connection works well for the most part, but whenever a sprint is closed and a new sprint is opened, issues that were in the sync queue still end up with the now-closed sprint as their sprint ID. This causes an error when the sync runs, because it tries to put the issue into a closed sprint. Is there a way to avoid this in my script? I am currently following the guide (https://docs.exalate.com/docs/how-to-sync-sprints) exactly.
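For context, the sprint handling in my scripts follows the guide: the Sprint custom field is passed through the replica on the outgoing side, and on the incoming side the remote sprint is mapped to a local sprint and assigned to the issue. A rough sketch of that shape (simplified, not the guide's exact code):

    // Outgoing sync: pass the Sprint custom field along in the replica.
    replica.customFields."Sprint" = issue.customFields."Sprint"

    // Incoming sync (simplified): take the first remote sprint that is not
    // closed, map it to a local sprint, and assign it. Nothing here checks
    // whether the *local* sprint is still open, which is where the trouble
    // starts once a sprint is closed while issues are waiting in the queue.
    def remoteSprintId = replica.customFields.Sprint?.value?.find { it.state.toUpperCase() != "CLOSED" }?.id
    if (remoteSprintId) {
        def localSprintId = nodeHelper.getLocalIssueKeyFromRemoteId(remoteSprintId, "sprint")?.id
        if (localSprintId) {
            issue.customFields.Sprint.value = localSprintId
        }
    }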

  1. Support

    Hi,

    Could you please share the full error stack trace?

    Regards,

    Harold Oconitrillo

  2. Jonathon Sisson

    Hey, thanks for getting back to me! Unfortunately, I cleared out the errors from this sprint just before making this post, so I will have to wait until next week to get you that stack trace. The error is something along the lines of "cannot assign an issue to a closed sprint".

  3. Jonathon Sisson

    Here is the error stack trace. The error happens any time an issue is in the sync queue when the sprint is closed. It seems that the provided script only accounts for the remote sprint being closed, not the local sprint. Is there a modification I can make to account for the local sprint being closed as well?


    • Error Stack Trace:

      jcloudnode.services.jcloud.exception.UpdateIssueJiraCloudException: Could not update issue `TCP-16` with id `26781`: Issue can be assigned only active or future sprints..
        at jcloudnode.services.node.JCloudTrackerExceptionCategoryService.generateUpdateIssueJiraCloudTrackerException(JCloudTrackerExceptionCategoryService.scala:422)
        at jcloudnode.services.jcloud.transport.JCloudClient.$anonfun$updateIssue$5(JCloudClient.scala:811)
        at jcloudnode.services.jcloud.transport.JCloudRestErrorHandlingService$$anonfun$recoverFromRestExceptionToJiraCloudTrackerExceptionOrBugException$1.applyOrElse(JCloudRestErrorHandlingService.scala:185)
        at jcloudnode.services.jcloud.transport.JCloudRestErrorHandlingService$$anonfun$recoverFromRestExceptionToJiraCloudTrackerExceptionOrBugException$1.applyOrElse(JCloudRestErrorHandlingService.scala:178)
        at scala.concurrent.Future.$anonfun$recoverWith$1(Future.scala:417)
        at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
        at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
        at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:100)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
      Caused by: com.exalate.domain.exception.issuetracker.BadRequestTrackerRestException: Issue can be assigned only active or future sprints..
        at jcloudnode.services.jcloud.transport.JCloudRestErrorHandlingService$.handleTrackerFourZeroZeroAndHigherRestResponse(JCloudRestErrorHandlingService.scala:89)
        at jcloudnode.services.jcloud.transport.JCloudRestErrorHandlingService$.filterTrackerRestErrorResponse(JCloudRestErrorHandlingService.scala:79)
        at jcloudnode.services.jcloud.transport.JCloudClient.$anonfun$updateIssue$3(JCloudClient.scala:804)
        at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
        ... 14 more

1 answer

  1.

    Using this snippet I have been able to at least check whether the local sprint is currently closed, which helped a lot with this issue:

    // Find the first remote sprint that is not closed
    def remoteSprintId = replica.customFields.Sprint?.value?.find { it.state.toUpperCase() != "CLOSED" }?.id
    if (remoteSprintId) {
        // Map the remote sprint to its local counterpart
        def localSprintId = nodeHelper.getLocalIssueKeyFromRemoteId(remoteSprintId, "sprint")?.id
        if (localSprintId) {
            // Look up the current state of the local sprint via the Jira Agile REST API
            def currentState = httpClient.get("/rest/agile/1.0/sprint/" + localSprintId)?.state?.toLowerCase()
            // Only assign the sprint if it is still active or future
            if (currentState && currentState != "closed") {
                issue.customFields.Sprint.value = localSprintId
            }
        }
    }
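    A possible extension, sketched on top of the same helpers used above rather than taken from the guide: when the mapped local sprint turns out to be closed, clear the Sprint value instead of silently skipping the assignment, so the issue no longer points at a closed sprint. This assumes that setting the field's value to null clears it on this side and that dropping the sprint is acceptable for your workflow:

    def remoteSprintId = replica.customFields.Sprint?.value?.find { it.state.toUpperCase() != "CLOSED" }?.id
    if (remoteSprintId) {
        def localSprintId = nodeHelper.getLocalIssueKeyFromRemoteId(remoteSprintId, "sprint")?.id
        if (localSprintId) {
            def currentState = httpClient.get("/rest/agile/1.0/sprint/" + localSprintId)?.state?.toLowerCase()
            if (currentState && currentState != "closed") {
                issue.customFields.Sprint.value = localSprintId
            } else {
                // Assumption: clearing the field is preferable to failing the sync
                // with "Issue can be assigned only active or future sprints".
                issue.customFields.Sprint.value = null
            }
        }
    }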