
[cromwell 28_2] MySQL database issue: Specified key was too long; max key length is 1000 bytes

Hello,
I encountered the database error below when trying to run a WDL with Cromwell and a MySQL database.
Could you give me any advice on it?

Reason: liquibase.exception.DatabaseException: Specified key was too long; max key length is 1000 bytes [Failed SQL: ALTER TABLE byoo_dev.BACKEND_KV_STORE ADD CONSTRAINT UK_BACKEND_KV_STORE_KEY UNIQUE (WORKFLOW_EXECUTION_UUID, CALL_FQN, JOB_SCATTER_INDEX, JOB_RETRY_ATTEMPT, STORE_KEY)]

The longer log is as follows:

[2017-08-01 10:10:53,58] [warn] modifyDataType will lose primary key/autoincrement/not null settings for mysql.  Use <sql> and re-specify all configuration if this is the case
[2017-08-01 10:10:53,58] [warn] modifyDataType will lose primary key/autoincrement/not null settings for mysql.  Use <sql> and re-specify all configuration if this is the case
[2017-08-01 10:10:57,05] [error] changelog.xml: changesets/backend_KV_Store.xml::backend_KV_store_job_key_constraint::rmunshi: Change Set changesets/backend_KV_Store.xml::backend_KV_store_job_key_constraint::rmunshi failed.  Error: Specified key was too long; max key length is 1000 bytes [Failed SQL: ALTER TABLE byoo_dev.BACKEND_KV_STORE ADD CONSTRAINT UK_BACKEND_KV_STORE_KEY UNIQUE (WORKFLOW_EXECUTION_UUID, CALL_FQN, JOB_SCATTER_INDEX, JOB_RETRY_ATTEMPT, STORE_KEY)]
java.lang.ExceptionInInitializerError
        at cromwell.server.CromwellRootActor.workflowStoreActor$lzycompute(CromwellRootActor.scala:59)
[akka.actor.ActorSystemImpl(cromwell-system)] Uncaught error from thread [cromwell-system-akka.dispatchers.engine-dispatcher-5] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled
        at cromwell.server.CromwellRootActor.workflowStoreActor(CromwellRootActor.scala:59)
        at cromwell.engine.workflow.SingleWorkflowRunnerActor$$anonfun$1.applyOrElse(SingleWorkflowRunnerActor.scala:55)
        at cromwell.engine.workflow.SingleWorkflowRunnerActor$$anonfun$1.applyOrElse(SingleWorkflowRunnerActor.scala:52)
        at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
        at akka.actor.FSM$class.processEvent(FSM.scala:663)
        at cromwell.engine.workflow.SingleWorkflowRunnerActor.akka$actor$LoggingFSM$$super$processEvent(SingleWorkflowRunnerActor.scala:37)
        at akka.actor.LoggingFSM$class.processEvent(FSM.scala:799)
        at cromwell.engine.workflow.SingleWorkflowRunnerActor.processEvent(SingleWorkflowRunnerActor.scala:37)
        at akka.actor.FSM$class.akka$actor$FSM$$processMsg(FSM.scala:657)
        at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:651)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
        at akka.actor.ActorCell.invoke(ActorCell.scala:495)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
        at akka.dispatch.Mailbox.run(Mailbox.scala:224)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: liquibase.exception.MigrationFailedException: Migration failed for change set changesets/backend_KV_Store.xml::backend_KV_store_job_key_constraint::rmunshi:
     Reason: liquibase.exception.DatabaseException: Specified key was too long; max key length is 1000 bytes [Failed SQL: ALTER TABLE byoo_dev.BACKEND_KV_STORE ADD CONSTRAINT UK_BACKEND_KV_STORE_KEY UNIQUE (WORKFLOW_EXECUTION_UUID, CALL_FQN, JOB_SCATTER_INDEX, JOB_RETRY_ATTEMPT, STORE_KEY)]
        at liquibase.changelog.ChangeSet.execute(ChangeSet.java:605)
        at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:51)
        at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:79)
        at liquibase.Liquibase.update(Liquibase.java:214)
        at liquibase.Liquibase.update(Liquibase.java:192)
        at cromwell.database.migration.liquibase.LiquibaseUtils$.updateSchema(LiquibaseUtils.scala:55)
        at cromwell.database.migration.liquibase.LiquibaseUtils$.updateSchema(LiquibaseUtils.scala:28)
        at cromwell.services.ServicesStore$EnhancedSqlDatabase$$anonfun$initialized$extension$2.apply(ServicesStore.scala:18)
        at cromwell.services.ServicesStore$EnhancedSqlDatabase$$anonfun$initialized$extension$2.apply(ServicesStore.scala:18)
        at cromwell.database.slick.SlickDatabase$$anonfun$withConnection$1.apply(SlickDatabase.scala:96)
        at cromwell.database.slick.SlickDatabase$$anonfun$withConnection$1.apply(SlickDatabase.scala:96)
        at slick.jdbc.SimpleJdbcAction.run(StreamingInvokerAction.scala:70)
        at slick.jdbc.SimpleJdbcAction.run(StreamingInvokerAction.scala:69)
        at slick.basic.BasicBackend$DatabaseDef$$anon$2.liftedTree1$1(BasicBackend.scala:240)
        at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:240)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: liquibase.exception.DatabaseException: Specified key was too long; max key length is 1000 bytes [Failed SQL: ALTER TABLE byoo_dev.BACKEND_KV_STORE ADD CONSTRAINT UK_BACKEND_KV_STORE_KEY UNIQUE (WORKFLOW_EXECUTION_UUID, CALL_FQN, JOB_SCATTER_INDEX, JOB_RETRY_ATTEMPT, STORE_KEY)]
        at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:301)
        at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:55)
        at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:107)
        at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1273)
        at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1255)
        at liquibase.changelog.ChangeSet.execute(ChangeSet.java:568)
        ... 17 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 1000 bytes
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
        at com.mysql.jdbc.Util.getInstance(Util.java:408)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:943)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2527)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2680)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2486)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2444)
        at com.mysql.jdbc.StatementImpl.executeInternal(StatementImpl.java:845)
        at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:745)
        at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
        at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
        at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:299)
        ... 22 more

Answers

  • Thanks for your advice, @kshakir. I am using MySQL 5.5 and it seems that innodb_large_prefix is undefined (see the check sketched below).
    I'll try to find a way to use a later release of MySQL.
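    For reference, a quick way to check this from a MySQL client is sketched below. These are standard MySQL statements; on servers that predate the variable, the SHOW VARIABLES query simply returns an empty set.

        -- Does the server know about innodb_large_prefix?
        SHOW VARIABLES LIKE 'innodb_large_prefix';

        -- Server version and default character set; multi-byte charsets
        -- (e.g. utf8) multiply the byte length of each indexed column,
        -- which is how a composite unique key can blow past the key-length limit.
        SELECT VERSION();
        SHOW VARIABLES LIKE 'character_set_server';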

  • It runs successfully with MySQL 5.7. Thanks!

  • Hmm, with MySQL the workflow performs very poorly and it even appears to hang.
    Do you have any idea what the issue might be? The difference in configuration is as follows:

    database {
      profile = "slick.jdbc.MySQLProfile$"
      db {
        driver = "com.mysql.jdbc.Driver"
        url = "jdbc:mysql://database:11212/cromwell?useSSL=false&rewriteBatchedStatements=true"
        user = "user"
        password = "pwd"
        connectionTimeout = 5000
      }
    }
    call-caching {
      enabled = true
      invalidate-bad-cache-results = true
    }
    
  • kshakir (Broadie, Dev)

    Hi @dayzcool,

    Without more details I'm not sure what you mean by underperforming and hanging. For example, Cromwell uses a combination of polling and queueing when communicating with the database, which adds some latency before jobs start (2 seconds by default). However, if you're talking strictly about profiling the database and observing internal MySQL issues, that's a very different matter.

    In very broad terms, there are a number of knobs one can tweak in the database.db stanza for the Slick/HikariCP/JDBC layers, but what the settings should be is highly dependent on Cromwell's usage pattern. Adjusting them one way or another can increase or decrease thread contention, while also increasing or decreasing memory usage. In general, we recommend that heavy users profile their specific workload to see what issues they're running into, and then adjust as necessary; a sketch of such a stanza is shown below.
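    As an illustration only (not recommended values), a database.db stanza with a few of these knobs spelled out might look like the sketch below. The connection details mirror the configuration posted above, the pool/executor keys (numThreads, queueSize, connectionTimeout) come from Slick 3's Database.forConfig settings, and the values are placeholders to be tuned against your own workload.

    database {
      profile = "slick.jdbc.MySQLProfile$"
      db {
        driver = "com.mysql.jdbc.Driver"
        url = "jdbc:mysql://database:11212/cromwell?useSSL=false&rewriteBatchedStatements=true"
        user = "user"
        password = "pwd"
        # Pool/executor knobs (Slick 3) -- placeholder values, tune per workload
        numThreads = 20           # size of Slick's async executor thread pool
        queueSize = 1000          # pending database actions allowed to queue
        connectionTimeout = 5000  # ms to wait when borrowing a connection from the pool
      }
    }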

  • @kshakir, sorry, I should have written more clearly. I executed a workflow running GATK tools (https://github.com/broadinstitute/gatk/blob/master/scripts/mutect2_wdl/mutect2_pon.wdl).
    With the MySQL backend and call-caching configuration, the workflow makes incredibly slow progress. Cromwell spends a very long time before it starts to submit jobs, and at some point in the middle of the workflow it isn't clear what Cromwell is up to: it neither submits jobs nor writes anything to the log, while MySQL sits idle. Judging by CPU usage it is doing something, though. Unfortunately, I didn't have time to look into the issue further. Without that configuration, it runs fine.

  • I will take a closer look at the JDBC backend code when I try it again. Thank you!

  • ChrisL (Cambridge, MA; Member, Broadie, Moderator, Dev)
    @dayzcool which Cromwell version are you using? A lot of changes to make call caching more efficient have been added in recent versions, so if possible I'd choose Cromwell 28 (the latest release).

  • @ChrisL, I used the latest release, Cromwell 28_2. Is there any way to make the log more verbose? Or would a thread dump help in understanding the issue?

  • ChrisL (Cambridge, MA; Member, Broadie, Moderator, Dev)

    Hmm, that's interesting. I'm a little out of my depth here since I've never seen or heard of that configuration value before. If you say it works with the newer MySQL without that configuration option, then I would run it like that.
